
Azure OpenAI in Microsoft Foundry Models v1 REST API reference

Only a subset of operations are currently supported with the v1 API. To learn more, see the API version lifecycle guide.

Create chat completion

POST {endpoint}/openai/v1/chat/completions
Creates a chat completion.

Parameters

| Name | In | Required | Type | Description |
|------|----|----------|------|-------------|
| endpoint | path | Yes | string (url) | Supported Azure OpenAI endpoints (protocol and hostname, for example `https://aoairesource.openai.azure.com`; replace "aoairesource" with your Azure OpenAI resource name): `https://{your-resource-name}.openai.azure.com` |
| api-version | query | No | string | The explicit Microsoft Foundry Models API version to use for this request. Defaults to `v1` if not otherwise specified. |

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.

| Name | Required | Type | Description |
|------|----------|------|-------------|
| Authorization | True | string | Example: `Authorization: Bearer {Azure_OpenAI_Auth_Token}`. To generate an auth token using the Azure CLI: `az account get-access-token --resource https://cognitiveservices.azure.com`. Type: oauth2. Authorization URL: `https://login.microsoftonline.com/common/oauth2/v2.0/authorize`. Scope: `https://cognitiveservices.azure.com/.default` |
| api-key | True | string | Provide the Azure OpenAI API key here. |
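As a sketch of the two authentication options above, a client might assemble its request headers like this. `build_headers` is a hypothetical helper for illustration, not part of any official SDK:

```python
def build_headers(bearer_token=None, api_key=None):
    """Build headers for an Azure OpenAI v1 request.

    Pass exactly one credential: a Microsoft Entra ID bearer token
    (recommended) or an API key. Illustrative helper, not official client code.
    """
    if (bearer_token is None) == (api_key is None):
        raise ValueError("provide exactly one of bearer_token or api_key")
    headers = {"Content-Type": "application/json"}
    if bearer_token is not None:
        # Token-based auth: Authorization: Bearer {Azure_OpenAI_Auth_Token}
        headers["Authorization"] = f"Bearer {bearer_token}"
    else:
        # API-key auth: the key travels in the api-key header
        headers["api-key"] = api_key
    return headers
```

Requests carrying both credentials (or neither) are rejected up front, which mirrors the either/or guidance above.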

Request Body

Content-Type: application/json
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| audio | object | Parameters for audio output. Required when audio output is requested with `modalities: ["audio"]`. | No | |
| └─ format | enum | Specifies the output audio format. Possible values: `wav`, `aac`, `mp3`, `flac`, `opus`, `pcm16` | No | |
| └─ voice | object | | No | |
| data_sources | array | The data sources to use for the On Your Data feature, exclusive to Azure OpenAI. | No | |
| frequency_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | No | 0 |
| function_call | enum | Specifying a particular function via `{"name": "my_function"}` forces the model to call that function. Possible values: `none`, `auto` | No | |
| functions | array | Deprecated in favor of `tools`. A list of functions the model may generate JSON inputs for. | No | |
| logit_bias | object | Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect varies per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. | No | None |
| logprobs | boolean | Whether to return log probabilities of the output tokens. If `true`, returns the log probabilities of each output token returned in the `content` of `message`. | No | False |
| max_completion_tokens | integer | An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens. | No | |
| max_tokens | integer | The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API. Deprecated in favor of `max_completion_tokens`; not compatible with o1 series models. | No | |
| messages | array | A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, images, and audio. | Yes | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. Useful for storing additional information about the object in a structured format, and for querying objects via the API or the dashboard. Keys are strings with a maximum length of 64 characters; values are strings with a maximum length of 512 characters. | No | |
| modalities | object | Output types that you would like the model to generate. Most models can generate text, which is the default: `["text"]`. The `gpt-4o-audio-preview` model can also generate audio; to request both text and audio responses, use `["text", "audio"]`. | No | |
| model | string | The model deployment identifier to use for the chat completion request. | Yes | |
| n | integer | How many chat completion choices to generate for each input message. Note that you are charged based on the number of generated tokens across all of the choices. Keep `n` as `1` to minimize costs. | No | 1 |
| parallel_tool_calls | object | Whether to enable parallel function calling during tool use. | No | |
| prediction | object | Base representation of predicted output from a model. | No | |
| └─ type | OpenAI.ChatOutputPredictionType | | No | |
| presence_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | No | 0 |
| reasoning_effort | object | Reasoning models only. Constrains effort on reasoning. Currently supported values are `low`, `medium`, and `high`. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. | No | |
| response_format | object | | No | |
| └─ type | enum | Possible values: `text`, `json_object`, `json_schema` | No | |
| seed | integer | This feature is in beta. If specified, the system makes a best effort to sample deterministically, so that repeated requests with the same `seed` and parameters should return the same result. Determinism is not guaranteed; refer to the `system_fingerprint` response parameter to monitor changes in the backend. | No | |
| stop | object | Not supported with the latest reasoning models `o3` and `o4-mini`. Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. | No | |
| store | boolean | Whether or not to store the output of this chat completion request for use in model distillation or evals products. | No | False |
| stream | boolean | If set to `true`, the model response data is streamed to the client as it is generated, using server-sent events. | No | False |
| stream_options | object | Options for streaming response. Only set this when you set `stream: true`. | No | |
| └─ include_usage | boolean | If set, an additional chunk is streamed before the `data: [DONE]` message. The `usage` field on this chunk shows the token usage statistics for the entire request, and the `choices` field is always an empty array. All other chunks also include a `usage` field, but with a null value. NOTE: if the stream is interrupted, you may not receive the final usage chunk, which contains the total token usage for the request. | No | |
| temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. | No | 1 |
| tool_choice | OpenAI.ChatCompletionToolChoiceOption | Controls which (if any) tool is called by the model. `none` means the model will not call any tool and instead generates a message. `auto` means the model can pick between generating a message or calling one or more tools. `required` means the model must call one or more tools. Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool. `none` is the default when no tools are present; `auto` is the default if tools are present. | No | |
| tools | array | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A maximum of 128 functions is supported. | No | |
| top_logprobs | integer | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. | No | |
| top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. | No | 1 |
| user | string | A unique identifier representing your end-user, which can help to monitor and detect abuse. | No | |
| user_security_context | AzureUserSecurityContext | User security context contains several parameters that describe the application itself and the end user that interacts with the application. These fields assist your security operations teams to investigate and mitigate security incidents by providing a comprehensive approach to protecting your AI applications. Learn more about protecting AI applications using Microsoft Defender for Cloud. | No | |

Responses

Status Code: 200. Description: The request has succeeded.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | AzureCreateChatCompletionResponse | |
| text/event-stream | AzureCreateChatCompletionStreamResponse | |

Status Code: default. Description: An unexpected error response.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | AzureErrorResponse | |

Examples

Example

Creates a completion for the provided prompt, parameters and chosen model.
POST {endpoint}/openai/v1/chat/completions

{
 "model": "gpt-4o-mini",
 "messages": [
  {
   "role": "system",
   "content": "you are a helpful assistant that talks like a pirate"
  },
  {
   "role": "user",
   "content": "can you tell me how to care for a parrot?"
  }
 ]
}

Responses: Status Code: 200
{
  "body": {
    "id": "chatcmpl-7R1nGnsXO8n4oi9UPz2f3UHdgAYMn",
    "created": 1686676106,
    "choices": [
      {
        "index": 0,
        "finish_reason": "stop",
        "message": {
          "role": "assistant",
          "content": "Ahoy matey! So ye be wantin' to care for a fine squawkin' parrot, eh?..."
        }
      }
    ],
    "usage": {
      "completion_tokens": 557,
      "prompt_tokens": 33,
      "total_tokens": 590
    }
  }
}
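The request above can be issued with nothing but Python's standard library. This is a minimal sketch, not official client code; the endpoint, bearer token, and deployment name are placeholders you must supply:

```python
import json
import urllib.request

def chat_completions_url(endpoint, api_version=None):
    # The api-version query parameter is optional; the service defaults to v1.
    url = f"{endpoint}/openai/v1/chat/completions"
    return f"{url}?api-version={api_version}" if api_version else url

def create_chat_completion(endpoint, token, payload, timeout=60):
    """POST a chat completion request and return the parsed JSON response."""
    req = urllib.request.Request(
        chat_completions_url(endpoint),
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())

payload = {
    "model": "gpt-4o-mini",  # your model deployment identifier
    "messages": [
        {"role": "system", "content": "you are a helpful assistant that talks like a pirate"},
        {"role": "user", "content": "can you tell me how to care for a parrot?"},
    ],
}
# response = create_chat_completion(
#     "https://{your-resource-name}.openai.azure.com", token, payload)
```

The same helper works for any of the POST operations on this page by swapping the path segment.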

Create embedding

POST {endpoint}/openai/v1/embeddings
Creates an embedding vector representing the input text.

Parameters

| Name | In | Required | Type | Description |
|------|----|----------|------|-------------|
| endpoint | path | Yes | string (url) | Supported Azure OpenAI endpoints (protocol and hostname, for example `https://aoairesource.openai.azure.com`; replace "aoairesource" with your Azure OpenAI resource name): `https://{your-resource-name}.openai.azure.com` |
| api-version | query | No | string | The explicit Foundry Models API version to use for this request. Defaults to `v1` if not otherwise specified. |

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.

| Name | Required | Type | Description |
|------|----------|------|-------------|
| Authorization | True | string | Example: `Authorization: Bearer {Azure_OpenAI_Auth_Token}`. To generate an auth token using the Azure CLI: `az account get-access-token --resource https://cognitiveservices.azure.com`. Type: oauth2. Authorization URL: `https://login.microsoftonline.com/common/oauth2/v2.0/authorize`. Scope: `https://cognitiveservices.azure.com/.default` |
| api-key | True | string | Provide the Azure OpenAI API key here. |

Request Body

Content-Type: application/json
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| dimensions | integer | The number of dimensions the resulting output embeddings should have. Only supported in `text-embedding-3` and later models. | No | |
| encoding_format | enum | The format to return the embeddings in. Possible values: `float`, `base64` | No | |
| input | string or array | | Yes | |
| model | string | The model to use for the embedding request. | Yes | |
| user | string | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. | No | |

Responses

Status Code: 200. Description: The request has succeeded.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | OpenAI.CreateEmbeddingResponse | |

Status Code: default. Description: An unexpected error response.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | AzureErrorResponse | |

Examples

Example

Return the embeddings for a given prompt.
POST {endpoint}/openai/v1/embeddings

{
 "model": "text-embedding-ada-002",
 "input": [
  "this is a test"
 ]
}

Responses: Status Code: 200
{
  "body": {
    "data": [
      {
        "index": 0,
        "embedding": [
          -0.012838088,
          -0.007421397,
          -0.017617522,
          -0.028278312,
          -0.018666342,
          0.01737855,
          -0.01821495,
          -0.006950092,
          -0.009937238,
          -0.038580645,
          0.010674067,
          0.02412286,
          -0.013647936,
          0.013189907,
          0.0021125758,
          0.012406612,
          0.020790534,
          0.00074595667,
          0.008397198,
          -0.00535031,
          0.008968075,
          0.014351576,
          -0.014086051,
          0.015055214,
          -0.022211088,
          -0.025198232,
          0.0065186154,
          -0.036350243,
          0.009180495,
          -0.009698266,
          0.009446018,
          -0.008463579,
          -0.0040426035,
          -0.03443847,
          -0.00091273896,
          -0.0019217303,
          0.002349888,
          -0.021560553,
          0.016515596,
          -0.015572986,
          0.0038666942,
          -8.432463e-05,
          0.0032178196,
          -0.020365695,
          -0.009631885,
          -0.007647093,
          0.0033837722,
          -0.026764825,
          -0.010501476,
          0.020219658,
          0.024640633,
          -0.0066912062,
          -0.036456455,
          -0.0040923897,
          -0.013966565,
          0.017816665,
          0.005366905,
          0.022835068,
          0.0103488,
          -0.0010811808,
          -0.028942121,
          0.0074280356,
          -0.017033368,
          0.0074877786,
          0.021640211,
          0.002499245,
          0.013316032,
          0.0021524043,
          0.010129742,
          0.0054731146,
          0.03143805,
          0.014856071,
          0.0023366117,
          -0.0008243692,
          0.022781964,
          0.003038591,
          -0.017617522,
          0.0013309394,
          0.0022154662,
          0.00097414135,
          0.012041516,
          -0.027906578,
          -0.023817508,
          0.013302756,
          -0.003003741,
          -0.006890349,
          0.0016744611
        ]
      }
    ],
    "usage": {
      "prompt_tokens": 4,
      "total_tokens": 4
    }
  }
}
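Embedding vectors like the one above are typically compared with cosine similarity. A small self-contained helper (not part of the API, shown only to illustrate how the returned vectors are used) looks like:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    if len(a) != len(b):
        raise ValueError("vectors must have the same length")
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Values close to 1.0 indicate semantically similar inputs; orthogonal vectors score near 0.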

List evals

GET {endpoint}/openai/v1/evals
List evaluations for a project.

Parameters

| Name | In | Required | Type | Description |
|------|----|----------|------|-------------|
| endpoint | path | Yes | string (url) | Supported Azure OpenAI endpoints (protocol and hostname, for example `https://aoairesource.openai.azure.com`; replace "aoairesource" with your Azure OpenAI resource name): `https://{your-resource-name}.openai.azure.com` |
| api-version | query | No | string | The explicit Foundry Models API version to use for this request. Defaults to `v1` if not otherwise specified. |
| aoai-evals | header | Yes | string | Enables access to AOAI Evals, a preview feature. This feature requires the `aoai-evals` header to be set to `preview`. Possible values: `preview` |
| after | query | No | string | Identifier for the last eval from the previous pagination request. |
| limit | query | No | integer | A limit on the number of evals to be returned in a single pagination response. |
| order | query | No | string | Sort order for evals by timestamp. Use `asc` for ascending order or `desc` for descending order. Possible values: `asc`, `desc` |
| order_by | query | No | string | Evals can be ordered by creation time or last updated time. Use `created_at` for creation time or `updated_at` for last updated time. Possible values: `created_at`, `updated_at` |

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.

| Name | Required | Type | Description |
|------|----------|------|-------------|
| Authorization | True | string | Example: `Authorization: Bearer {Azure_OpenAI_Auth_Token}`. To generate an auth token using the Azure CLI: `az account get-access-token --resource https://cognitiveservices.azure.com`. Type: oauth2. Authorization URL: `https://login.microsoftonline.com/common/oauth2/v2.0/authorize`. Scope: `https://cognitiveservices.azure.com/.default` |
| api-key | True | string | Provide the Azure OpenAI API key here. |

Responses

Status Code: 200. Description: The request has succeeded.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | OpenAI.EvalList | |

Status Code: default. Description: An unexpected error response.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | AzureErrorResponse | |
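The `after` and `limit` query parameters imply the usual cursor pagination loop. The sketch below assumes the list response follows the OpenAI list-object convention (`data`, `has_more`, items carrying an `id`), which is an assumption rather than a guarantee of this API; `fetch_page` is a stand-in for the HTTP GET:

```python
def iterate_evals(fetch_page, limit=20):
    """Yield every eval by following the `after` cursor.

    `fetch_page(after, limit)` stands in for GET {endpoint}/openai/v1/evals
    and must return the parsed JSON page. The `data`/`has_more` shape is an
    assumption based on the OpenAI list convention.
    """
    after = None
    while True:
        page = fetch_page(after=after, limit=limit)
        data = page.get("data", [])
        yield from data
        if not data or not page.get("has_more"):
            return
        # The cursor for the next page is the id of the last eval seen.
        after = data[-1]["id"]
```

Because the generator drives `fetch_page`, the same loop works unchanged for eval runs and output items, which expose the same `after`/`limit` parameters.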

Create eval

POST {endpoint}/openai/v1/evals
Create the structure of an evaluation that can be used to test a model’s performance. An evaluation is a set of testing criteria and a datasource. After creating an evaluation, you can run it on different models and model parameters. We support several types of graders and datasources.
This Azure OpenAI operation is in preview and subject to change.

Parameters

| Name | In | Required | Type | Description |
|------|----|----------|------|-------------|
| endpoint | path | Yes | string (url) | Supported Azure OpenAI endpoints (protocol and hostname, for example `https://aoairesource.openai.azure.com`; replace "aoairesource" with your Azure OpenAI resource name): `https://{your-resource-name}.openai.azure.com` |
| api-version | query | No | string | The explicit Foundry Models API version to use for this request. Defaults to `v1` if not otherwise specified. |
| aoai-evals | header | Yes | string | Enables access to AOAI Evals, a preview feature. This feature requires the `aoai-evals` header to be set to `preview`. Possible values: `preview` |

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.

| Name | Required | Type | Description |
|------|----------|------|-------------|
| Authorization | True | string | Example: `Authorization: Bearer {Azure_OpenAI_Auth_Token}`. To generate an auth token using the Azure CLI: `az account get-access-token --resource https://cognitiveservices.azure.com`. Type: oauth2. Authorization URL: `https://login.microsoftonline.com/common/oauth2/v2.0/authorize`. Scope: `https://cognitiveservices.azure.com/.default` |
| api-key | True | string | Provide the Azure OpenAI API key here. |

Request Body

Content-Type: application/json
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| data_source_config | object | | Yes | |
| └─ type | OpenAI.EvalDataSourceConfigType | | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. Useful for storing additional information about the object in a structured format, and for querying objects via the API or the dashboard. Keys are strings with a maximum length of 64 characters; values are strings with a maximum length of 512 characters. | No | |
| name | string | The name of the evaluation. | No | |
| statusCode | enum | Possible values: `201` | Yes | |
| testing_criteria | array | A list of graders for all eval runs in this group. Graders can reference variables in the data source using double curly brace notation, like `{{item.variable_name}}`. To reference the model's output, use the `sample` namespace (i.e., `{{sample.output_text}}`). | Yes | |
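To illustrate the double curly brace references used by `testing_criteria`: the substitution happens server-side when the eval runs, but its effect can be sketched with a hypothetical resolver (the function and its regex are illustrative, not service behavior):

```python
import re

def resolve_refs(template, item, sample):
    """Resolve {{item.*}} and {{sample.*}} references in a grader template.

    Illustrative only: the service performs this substitution itself; this
    sketch just demonstrates the referencing convention.
    """
    namespaces = {"item": item, "sample": sample}

    def substitute(match):
        # "item.question" -> namespace "item", field "question"
        root, _, field = match.group(1).partition(".")
        return str(namespaces[root][field])

    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", substitute, template)
```

Here `item` holds a row from the data source and `sample` holds the model output for that row, matching the two namespaces named in the table above.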

Responses

Status Code: 200. Description: The request has succeeded.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | OpenAI.Eval | |

Status Code: default. Description: An unexpected error response.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | AzureErrorResponse | |

Get eval

GET {endpoint}/openai/v1/evals/{eval_id}
Retrieves an evaluation by its ID.
This Azure OpenAI operation is in preview and subject to change.

Parameters

| Name | In | Required | Type | Description |
|------|----|----------|------|-------------|
| endpoint | path | Yes | string (url) | Supported Azure OpenAI endpoints (protocol and hostname, for example `https://aoairesource.openai.azure.com`; replace "aoairesource" with your Azure OpenAI resource name): `https://{your-resource-name}.openai.azure.com` |
| api-version | query | No | string | The explicit Foundry Models API version to use for this request. Defaults to `v1` if not otherwise specified. |
| aoai-evals | header | Yes | string | Enables access to AOAI Evals, a preview feature. This feature requires the `aoai-evals` header to be set to `preview`. Possible values: `preview` |
| eval_id | path | Yes | string | |

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.

| Name | Required | Type | Description |
|------|----------|------|-------------|
| Authorization | True | string | Example: `Authorization: Bearer {Azure_OpenAI_Auth_Token}`. To generate an auth token using the Azure CLI: `az account get-access-token --resource https://cognitiveservices.azure.com`. Type: oauth2. Authorization URL: `https://login.microsoftonline.com/common/oauth2/v2.0/authorize`. Scope: `https://cognitiveservices.azure.com/.default` |
| api-key | True | string | Provide the Azure OpenAI API key here. |

Responses

Status Code: 200. Description: The request has succeeded.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | OpenAI.Eval | |

Status Code: default. Description: An unexpected error response.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | AzureErrorResponse | |

Update eval

POST {endpoint}/openai/v1/evals/{eval_id}
Update select, mutable properties of a specified evaluation.
This Azure OpenAI operation is in preview and subject to change.

Parameters

| Name | In | Required | Type | Description |
|------|----|----------|------|-------------|
| endpoint | path | Yes | string (url) | Supported Azure OpenAI endpoints (protocol and hostname, for example `https://aoairesource.openai.azure.com`; replace "aoairesource" with your Azure OpenAI resource name): `https://{your-resource-name}.openai.azure.com` |
| api-version | query | No | string | The explicit Foundry Models API version to use for this request. Defaults to `v1` if not otherwise specified. |
| aoai-evals | header | Yes | string | Enables access to AOAI Evals, a preview feature. This feature requires the `aoai-evals` header to be set to `preview`. Possible values: `preview` |
| eval_id | path | Yes | string | |

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.

| Name | Required | Type | Description |
|------|----------|------|-------------|
| Authorization | True | string | Example: `Authorization: Bearer {Azure_OpenAI_Auth_Token}`. To generate an auth token using the Azure CLI: `az account get-access-token --resource https://cognitiveservices.azure.com`. Type: oauth2. Authorization URL: `https://login.microsoftonline.com/common/oauth2/v2.0/authorize`. Scope: `https://cognitiveservices.azure.com/.default` |
| api-key | True | string | Provide the Azure OpenAI API key here. |

Request Body

Content-Type: application/json
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| metadata | OpenAI.MetadataPropertyForRequest | Set of 16 key-value pairs that can be attached to an object. Useful for storing additional information about the object in a structured format, and for querying objects via the API or the dashboard. Keys are strings with a maximum length of 64 characters; values are strings with a maximum length of 512 characters. | No | |
| name | string | | No | |

Responses

Status Code: 200. Description: The request has succeeded.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | OpenAI.Eval | |

Status Code: default. Description: An unexpected error response.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | AzureErrorResponse | |

Delete eval

DELETE {endpoint}/openai/v1/evals/{eval_id}
Delete a specified evaluation.
This Azure OpenAI operation is in preview and subject to change.

Parameters

| Name | In | Required | Type | Description |
|------|----|----------|------|-------------|
| endpoint | path | Yes | string (url) | Supported Azure OpenAI endpoints (protocol and hostname, for example `https://aoairesource.openai.azure.com`; replace "aoairesource" with your Azure OpenAI resource name): `https://{your-resource-name}.openai.azure.com` |
| api-version | query | No | string | The explicit Foundry Models API version to use for this request. Defaults to `v1` if not otherwise specified. |
| aoai-evals | header | Yes | string | Enables access to AOAI Evals, a preview feature. This feature requires the `aoai-evals` header to be set to `preview`. Possible values: `preview` |
| eval_id | path | Yes | string | |

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.

| Name | Required | Type | Description |
|------|----------|------|-------------|
| Authorization | True | string | Example: `Authorization: Bearer {Azure_OpenAI_Auth_Token}`. To generate an auth token using the Azure CLI: `az account get-access-token --resource https://cognitiveservices.azure.com`. Type: oauth2. Authorization URL: `https://login.microsoftonline.com/common/oauth2/v2.0/authorize`. Scope: `https://cognitiveservices.azure.com/.default` |
| api-key | True | string | Provide the Azure OpenAI API key here. |

Responses

Status Code: 200. Description: The request has succeeded.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | object | |

Status Code: default. Description: An unexpected error response.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | AzureErrorResponse | |

Get eval runs

GET {endpoint}/openai/v1/evals/{eval_id}/runs
Retrieve a list of runs for a specified evaluation.
This Azure OpenAI operation is in preview and subject to change.

Parameters

| Name | In | Required | Type | Description |
|------|----|----------|------|-------------|
| endpoint | path | Yes | string (url) | Supported Azure OpenAI endpoints (protocol and hostname, for example `https://aoairesource.openai.azure.com`; replace "aoairesource" with your Azure OpenAI resource name): `https://{your-resource-name}.openai.azure.com` |
| api-version | query | No | string | The explicit Foundry Models API version to use for this request. Defaults to `v1` if not otherwise specified. |
| aoai-evals | header | Yes | string | Enables access to AOAI Evals, a preview feature. This feature requires the `aoai-evals` header to be set to `preview`. Possible values: `preview` |
| eval_id | path | Yes | string | |
| after | query | No | string | |
| limit | query | No | integer | |
| order | query | No | string | Possible values: `asc`, `desc` |
| status | query | No | string | Possible values: `queued`, `in_progress`, `completed`, `canceled`, `failed` |

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.

| Name | Required | Type | Description |
|------|----------|------|-------------|
| Authorization | True | string | Example: `Authorization: Bearer {Azure_OpenAI_Auth_Token}`. To generate an auth token using the Azure CLI: `az account get-access-token --resource https://cognitiveservices.azure.com`. Type: oauth2. Authorization URL: `https://login.microsoftonline.com/common/oauth2/v2.0/authorize`. Scope: `https://cognitiveservices.azure.com/.default` |
| api-key | True | string | Provide the Azure OpenAI API key here. |

Responses

Status Code: 200. Description: The request has succeeded.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | OpenAI.EvalRunList | |

Status Code: default. Description: An unexpected error response.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | AzureErrorResponse | |

Create eval run

POST {endpoint}/openai/v1/evals/{eval_id}/runs
Create a new evaluation run, beginning the grading process.
This Azure OpenAI operation is in preview and subject to change.

Parameters

| Name | In | Required | Type | Description |
|------|----|----------|------|-------------|
| endpoint | path | Yes | string (url) | Supported Azure OpenAI endpoints (protocol and hostname, for example `https://aoairesource.openai.azure.com`; replace "aoairesource" with your Azure OpenAI resource name): `https://{your-resource-name}.openai.azure.com` |
| api-version | query | No | string | The explicit Foundry Models API version to use for this request. Defaults to `v1` if not otherwise specified. |
| aoai-evals | header | Yes | string | Enables access to AOAI Evals, a preview feature. This feature requires the `aoai-evals` header to be set to `preview`. Possible values: `preview` |
| eval_id | path | Yes | string | |

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.

| Name | Required | Type | Description |
|------|----------|------|-------------|
| Authorization | True | string | Example: `Authorization: Bearer {Azure_OpenAI_Auth_Token}`. To generate an auth token using the Azure CLI: `az account get-access-token --resource https://cognitiveservices.azure.com`. Type: oauth2. Authorization URL: `https://login.microsoftonline.com/common/oauth2/v2.0/authorize`. Scope: `https://cognitiveservices.azure.com/.default` |
| api-key | True | string | Provide the Azure OpenAI API key here. |

Request Body

Content-Type: application/json
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| data_source | object | | Yes | |
| └─ type | OpenAI.EvalRunDataSourceType | | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. Useful for storing additional information about the object in a structured format, and for querying objects via the API or the dashboard. Keys are strings with a maximum length of 64 characters; values are strings with a maximum length of 512 characters. | No | |
| name | string | The name of the run. | No | |

Responses

Status Code: 201. Description: The request has succeeded and a new resource has been created as a result.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | OpenAI.EvalRun | |

Status Code: default. Description: An unexpected error response.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | AzureErrorResponse | |

Get eval run

GET {endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}
Retrieve a specific evaluation run by its ID.
This Azure OpenAI operation is in preview and subject to change.

Parameters

| Name | In | Required | Type | Description |
|------|----|----------|------|-------------|
| endpoint | path | Yes | string (url) | Supported Azure OpenAI endpoints (protocol and hostname, for example `https://aoairesource.openai.azure.com`; replace "aoairesource" with your Azure OpenAI resource name): `https://{your-resource-name}.openai.azure.com` |
| api-version | query | No | string | The explicit Foundry Models API version to use for this request. Defaults to `v1` if not otherwise specified. |
| aoai-evals | header | Yes | string | Enables access to AOAI Evals, a preview feature. This feature requires the `aoai-evals` header to be set to `preview`. Possible values: `preview` |
| eval_id | path | Yes | string | |
| run_id | path | Yes | string | |

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.

| Name | Required | Type | Description |
|------|----------|------|-------------|
| Authorization | True | string | Example: `Authorization: Bearer {Azure_OpenAI_Auth_Token}`. To generate an auth token using the Azure CLI: `az account get-access-token --resource https://cognitiveservices.azure.com`. Type: oauth2. Authorization URL: `https://login.microsoftonline.com/common/oauth2/v2.0/authorize`. Scope: `https://cognitiveservices.azure.com/.default` |
| api-key | True | string | Provide the Azure OpenAI API key here. |

Responses

Status Code: 200. Description: The request has succeeded.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | OpenAI.EvalRun | |

Status Code: default. Description: An unexpected error response.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | AzureErrorResponse | |

Cancel eval run

POST {endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}
Cancel a specific evaluation run by its ID.
This Azure OpenAI operation is in preview and subject to change.

Parameters

| Name | In | Required | Type | Description |
|------|----|----------|------|-------------|
| endpoint | path | Yes | string (url) | Supported Azure OpenAI endpoints (protocol and hostname, for example `https://aoairesource.openai.azure.com`; replace "aoairesource" with your Azure OpenAI resource name): `https://{your-resource-name}.openai.azure.com` |
| api-version | query | No | string | The explicit Foundry Models API version to use for this request. Defaults to `v1` if not otherwise specified. |
| aoai-evals | header | Yes | string | Enables access to AOAI Evals, a preview feature. This feature requires the `aoai-evals` header to be set to `preview`. Possible values: `preview` |
| eval_id | path | Yes | string | |
| run_id | path | Yes | string | |

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.

| Name | Required | Type | Description |
|------|----------|------|-------------|
| Authorization | True | string | Example: `Authorization: Bearer {Azure_OpenAI_Auth_Token}`. To generate an auth token using the Azure CLI: `az account get-access-token --resource https://cognitiveservices.azure.com`. Type: oauth2. Authorization URL: `https://login.microsoftonline.com/common/oauth2/v2.0/authorize`. Scope: `https://cognitiveservices.azure.com/.default` |
| api-key | True | string | Provide the Azure OpenAI API key here. |

Responses

Status Code: 200. Description: The request has succeeded.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | OpenAI.EvalRun | |

Status Code: default. Description: An unexpected error response.

| Content-Type | Type | Description |
|--------------|------|-------------|
| application/json | AzureErrorResponse | |

Delete eval run

DELETE {endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}
Delete a specific evaluation run by its ID.
This Azure OpenAI operation is in preview and subject to change.

Parameters

| Name | In | Required | Type | Description |
|------|----|----------|------|-------------|
| endpoint | path | Yes | string (url) | Supported Azure OpenAI endpoints (protocol and hostname, for example `https://aoairesource.openai.azure.com`; replace "aoairesource" with your Azure OpenAI resource name): `https://{your-resource-name}.openai.azure.com` |
| api-version | query | No | string | The explicit Foundry Models API version to use for this request. Defaults to `v1` if not otherwise specified. |
| aoai-evals | header | Yes | string | Enables access to AOAI Evals, a preview feature. This feature requires the `aoai-evals` header to be set to `preview`. Possible values: `preview` |
| eval_id | path | Yes | string | |
| run_id | path | Yes | string | |

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonobject
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse
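As a minimal sketch, the delete request above can be constructed with Python's standard library; the endpoint, eval_id, and run_id values are placeholders to replace with your own:

```python
from urllib.request import Request

# Placeholder values; substitute your resource name and real IDs.
endpoint = "https://aoairesource.openai.azure.com"
eval_id = "eval_abc123"
run_id = "evalrun_abc123"

req = Request(
    f"{endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}",
    method="DELETE",
    headers={
        "api-key": "YOUR_API_KEY",  # or Authorization: Bearer <token>
        "aoai-evals": "preview",    # required while AOAI Evals is in preview
    },
)
# req is only constructed here; send it with urllib.request.urlopen(req).
```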

Get eval run output items

GET {endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}/output_items
Get a list of output items for a specified evaluation run.
This Azure OpenAI operation is in preview and subject to change.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
aoai-evalsheaderYesstring
Possible values: preview
Enables access to AOAI Evals, a preview feature.
This feature requires the ‘aoai-evals’ header to be set to ‘preview’.
eval_idpathYesstring
run_idpathYesstring
afterqueryNostring
limitqueryNointeger
statusqueryNostring
Possible values: fail, pass
orderqueryNostring
Possible values: asc, desc

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.EvalRunOutputItemList
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse
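To illustrate the query parameters above, the following sketch builds the URL for listing failed output items in descending order; the endpoint and IDs are placeholders:

```python
from urllib.parse import urlencode
from urllib.request import Request

endpoint = "https://aoairesource.openai.azure.com"  # placeholder resource
eval_id, run_id = "eval_abc123", "evalrun_abc123"   # placeholder IDs

# Page through failed output items, newest first, 20 per page.
params = {"status": "fail", "order": "desc", "limit": 20}
url = (f"{endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}"
       f"/output_items?{urlencode(params)}")
req = Request(url, headers={"api-key": "YOUR_API_KEY",
                            "aoai-evals": "preview"})
```

Subsequent pages pass the ID of the last item returned via the after parameter.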

Get eval run output item

GET {endpoint}/openai/v1/evals/{eval_id}/runs/{run_id}/output_items/{output_item_id}
Retrieve a specific output item from an evaluation run by its ID.
This Azure OpenAI operation is in preview and subject to change.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
aoai-evalsheaderYesstring
Possible values: preview
Enables access to AOAI Evals, a preview feature.
This feature requires the ‘aoai-evals’ header to be set to ‘preview’.
eval_idpathYesstring
run_idpathYesstring
output_item_idpathYesstring

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.EvalRunOutputItem
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Create file

POST {endpoint}/openai/v1/files

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Request Body

Content-Type: multipart/form-data
NameTypeDescriptionRequiredDefault
expires_afterobjectYes
└─ anchorAzureFileExpiryAnchorNo
└─ secondsintegerNo
filestringYes
purposeenumThe intended purpose of the uploaded file. One of:
- assistants: Used in the Assistants API
- batch: Used in the Batch API
- fine-tune: Used for fine-tuning
- evals: Used for eval data sets
Possible values: assistants, batch, fine-tune, evals
Yes

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonAzureOpenAIFile
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Examples

Example

POST {endpoint}/openai/v1/files
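A sketch of how the multipart/form-data body could be assembled with the standard library; the boundary, file name, and JSONL contents are illustrative:

```python
import io
import uuid

boundary = uuid.uuid4().hex
file_name = "training.jsonl"                    # illustrative file name
file_bytes = b'{"messages": [{"role": "user", "content": "hi"}]}\n'

body = io.BytesIO()
# Plain field: purpose
body.write((
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="purpose"\r\n\r\n'
    "fine-tune\r\n"
).encode())
# File field
body.write((
    f"--{boundary}\r\n"
    f'Content-Disposition: form-data; name="file"; filename="{file_name}"\r\n'
    "Content-Type: application/octet-stream\r\n\r\n"
).encode())
body.write(file_bytes)
body.write(f"\r\n--{boundary}--\r\n".encode())

payload = body.getvalue()
content_type = f"multipart/form-data; boundary={boundary}"
```

POST payload to {endpoint}/openai/v1/files with the Content-Type header set to content_type.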

List files

GET {endpoint}/openai/v1/files

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
purposequeryNostring

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonAzureListFilesResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Retrieve file

GET {endpoint}/openai/v1/files/{file_id}

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
file_idpathYesstringThe ID of the file to use for this request.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonAzureOpenAIFile
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Delete file

DELETE {endpoint}/openai/v1/files/{file_id}

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
file_idpathYesstringThe ID of the file to use for this request.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.DeleteFileResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Download file

GET {endpoint}/openai/v1/files/{file_id}/content

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
file_idpathYesstringThe ID of the file to use for this request.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/octet-streamstring
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Run grader

POST {endpoint}/openai/v1/fine_tuning/alpha/graders/run
Run a grader.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
graderobjectA StringCheckGrader object that performs a string comparison between input and reference using a specified operation.Yes
└─ calculate_outputstringA formula to calculate the output based on grader results.No
└─ evaluation_metricenumThe evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l.
Possible values: fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l
No
└─ gradersobjectNo
└─ image_tagstringThe image tag to use for the Python script.No
└─ inputarrayThe input text. This may include template strings.No
└─ modelstringThe model to use for the evaluation.No
└─ namestringThe name of the grader.No
└─ operationenumThe string check operation to perform. One of eq, ne, like, or ilike.
Possible values: eq, ne, like, ilike
No
└─ rangearrayThe range of the score. Defaults to [0, 1].No
└─ referencestringThe text being graded against.No
└─ sampling_paramsThe sampling parameters for the model.No
└─ sourcestringThe source code of the Python script.No
└─ typeenumThe object type, which is always multi.
Possible values: multi
No
itemThe dataset item provided to the grader. This will be used to populate
the item namespace. See the guide for more details.
No
model_samplestringThe model sample to be evaluated. This value will be used to populate
the sample namespace. See the guide for more details.
The output_json variable will be populated if the model sample is a
valid JSON string.
Yes

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.RunGraderResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse
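As a hedged example of the request body above, a string_check grader payload might look like the following; the grader name, templates, and sample values are illustrative, and string_check is only one of the grader shapes merged into the table above:

```python
import json

payload = {
    "grader": {
        "type": "string_check",
        "name": "exact-match",              # illustrative grader name
        "operation": "eq",                  # eq | ne | like | ilike
        "input": "{{sample.output_text}}",  # template over the sample namespace
        "reference": "{{item.expected}}",   # template over the item namespace
    },
    "item": {"expected": "Paris"},          # populates the item namespace
    "model_sample": "Paris",                # populates the sample namespace
}
body = json.dumps(payload)
```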

Validate grader

POST {endpoint}/openai/v1/fine_tuning/alpha/graders/validate
Validate a grader.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
graderobjectA StringCheckGrader object that performs a string comparison between input and reference using a specified operation.Yes
└─ calculate_outputstringA formula to calculate the output based on grader results.No
└─ evaluation_metricenumThe evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l.
Possible values: fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l
No
└─ gradersobjectNo
└─ image_tagstringThe image tag to use for the Python script.No
└─ inputarrayThe input text. This may include template strings.No
└─ modelstringThe model to use for the evaluation.No
└─ namestringThe name of the grader.No
└─ operationenumThe string check operation to perform. One of eq, ne, like, or ilike.
Possible values: eq, ne, like, ilike
No
└─ rangearrayThe range of the score. Defaults to [0, 1].No
└─ referencestringThe text being graded against.No
└─ sampling_paramsThe sampling parameters for the model.No
└─ sourcestringThe source code of the Python script.No
└─ typeenumThe object type, which is always multi.
Possible values: multi
No

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.ValidateGraderResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Create fine-tuning job

POST {endpoint}/openai/v1/fine_tuning/jobs
Creates a fine-tuning job which begins the process of creating a new model from a given dataset. The response includes details of the enqueued job, including the job status and the name of the fine-tuned model once complete. Learn more about fine-tuning.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
hyperparametersobjectThe hyperparameters used for the fine-tuning job.
This value is now deprecated in favor of method, and should be passed in under the method parameter.
No
└─ batch_sizeenum
Possible values: auto
No
└─ learning_rate_multiplierenum
Possible values: auto
No
└─ n_epochsenum
Possible values: auto
No
integrationsarrayA list of integrations to enable for your fine-tuning job.No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
methodOpenAI.FineTuneMethodThe method used for fine-tuning.No
modelstring (see valid models below)The name of the model to fine-tune. You can select one of the
supported models.
Yes
seedintegerThe seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases.
If a seed is not specified, one will be generated for you.
No
suffixstringA string of up to 64 characters that will be added to your fine-tuned model name.

For example, a suffix of “custom-model-name” would produce a model name like ft:gpt-4o-mini:openai:custom-model-name:7p4lURel.
NoNone
training_filestringThe ID of an uploaded file that contains training data.

See upload file for how to upload a file.

Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose fine-tune.

The contents of the file should differ depending on whether the model uses the chat format, or whether the fine-tuning method uses the preference format.

See the fine-tuning guide for more details.
Yes
validation_filestringThe ID of an uploaded file that contains validation data.

If you provide this file, the data is used to generate validation
metrics periodically during fine-tuning. These metrics can be viewed in
the fine-tuning results file.
The same data should not be present in both train and validation files.

Your dataset must be formatted as a JSONL file. You must upload your file with the purpose fine-tune.

See the fine-tuning guide for more details.
No

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.FineTuningJob
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse
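A sketch of a minimal request body for this operation; the model name and file ID are placeholders, and the optional method object (OpenAI.FineTuneMethod) is omitted here for brevity:

```python
import json

payload = {
    "model": "gpt-4o-mini",          # placeholder: use a supported model
    "training_file": "file-abc123",  # ID from a prior upload with purpose fine-tune
    "suffix": "custom-model-name",   # optional, up to 64 characters
    "seed": 42,                      # optional, for reproducibility
}
body = json.dumps(payload)
```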

List paginated fine-tuning jobs

GET {endpoint}/openai/v1/fine_tuning/jobs
List your organization’s fine-tuning jobs.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
afterqueryNostringIdentifier for the last job from the previous pagination request.
limitqueryNointegerNumber of fine-tuning jobs to retrieve.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.ListPaginatedFineTuningJobsResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Retrieve fine-tuning job

GET {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}
Get info about a fine-tuning job. Learn more about fine-tuning

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
fine_tuning_job_idpathYesstringThe ID of the fine-tuning job.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.FineTuningJob
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Cancel fine-tuning job

POST {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/cancel
Immediately cancel a fine-tune job.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
fine_tuning_job_idpathYesstringThe ID of the fine-tuning job to cancel.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.FineTuningJob
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

List fine-tuning job checkpoints

GET {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints
List the checkpoints for a fine-tuning job.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
fine_tuning_job_idpathYesstringThe ID of the fine-tuning job to get checkpoints for.
afterqueryNostringIdentifier for the last checkpoint ID from the previous pagination request.
limitqueryNointegerNumber of checkpoints to retrieve.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.ListFineTuningJobCheckpointsResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Fine-tuning - Copy checkpoint

POST {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints/{fine_tuning_checkpoint_name}/copy
Creates a copy of a fine-tuning checkpoint at the given destination account and region.
This Azure OpenAI operation is in preview and subject to change.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
aoai-copy-ft-checkpointsheaderYesstring
Possible values: preview
Enables access to checkpoint copy operations for models, an AOAI preview feature.
This feature requires the ‘aoai-copy-ft-checkpoints’ header to be set to ‘preview’.
acceptheaderYesstring
Possible values: application/json
fine_tuning_job_idpathYesstring
fine_tuning_checkpoint_namepathYesstring

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
destinationResourceIdstringThe ID of the destination resource to copy the checkpoint to.Yes
regionstringThe region to copy the model to.Yes

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonCopyModelResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse
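Sketch of the copy request, including both the preview and accept headers from the parameter table; the job ID, checkpoint name, destination resource ID, and region are all placeholders:

```python
import json
from urllib.request import Request

endpoint = "https://aoairesource.openai.azure.com"
job_id = "ftjob-abc123"        # placeholder job ID
checkpoint = "ftchkpt-abc123"  # placeholder checkpoint name

body = json.dumps({
    "destinationResourceId": "<destination-resource-id>",  # placeholder
    "region": "eastus2",                                   # placeholder region
}).encode()

req = Request(
    f"{endpoint}/openai/v1/fine_tuning/jobs/{job_id}"
    f"/checkpoints/{checkpoint}/copy",
    data=body,  # a Request with a body defaults to POST
    headers={
        "api-key": "YOUR_API_KEY",
        "aoai-copy-ft-checkpoints": "preview",  # required preview header
        "accept": "application/json",           # required per the table
        "content-type": "application/json",
    },
)
```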

Fine-tuning - Get checkpoint

GET {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints/{fine_tuning_checkpoint_name}/copy
Gets the status of a fine-tuning checkpoint copy.
This Azure OpenAI operation is in preview and subject to change.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
aoai-copy-ft-checkpointsheaderYesstring
Possible values: preview
Enables access to checkpoint copy operations for models, an AOAI preview feature.
This feature requires the ‘aoai-copy-ft-checkpoints’ header to be set to ‘preview’.
acceptheaderYesstring
Possible values: application/json
fine_tuning_job_idpathYesstring
fine_tuning_checkpoint_namepathYesstring

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonCopyModelResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

List fine-tuning events

GET {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/events
Get status updates for a fine-tuning job.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
fine_tuning_job_idpathYesstringThe ID of the fine-tuning job to get events for.
afterqueryNostringIdentifier for the last event from the previous pagination request.
limitqueryNointegerNumber of events to retrieve.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.ListFineTuningJobEventsResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Pause fine-tuning job

POST {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/pause
Pause a fine-tuning job.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
fine_tuning_job_idpathYesstringThe ID of the fine-tuning job to pause.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.FineTuningJob
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Resume fine-tuning job

POST {endpoint}/openai/v1/fine_tuning/jobs/{fine_tuning_job_id}/resume
Resume a paused fine-tuning job.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
fine_tuning_job_idpathYesstringThe ID of the fine-tuning job to resume.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.FineTuningJob
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse
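Pause and resume share the same URL shape, differing only in the final path segment; both are POST requests with no body. A small sketch of that symmetry (endpoint and job ID are placeholders):

```python
def job_action_url(endpoint: str, job_id: str, action: str) -> str:
    """URL for POST /openai/v1/fine_tuning/jobs/{id}/pause or .../resume."""
    if action not in ("pause", "resume"):
        raise ValueError("action must be 'pause' or 'resume'")
    return f"{endpoint}/openai/v1/fine_tuning/jobs/{job_id}/{action}"
```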

List models

GET {endpoint}/openai/v1/models
Lists the currently available models, and provides basic information about each one such as the owner and availability.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.ListModelsResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Retrieve model

GET {endpoint}/openai/v1/models/{model}
Retrieves a model instance, providing basic information about the model such as the owner and permissioning.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
modelpathYesstringThe ID of the model to use for this request.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.Model
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Create response

POST {endpoint}/openai/v1/responses
Creates a model response.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
backgroundbooleanWhether to run the model response in the background.
NoFalse
includearraySpecify additional output data to include in the model response. Currently
supported values are:
- code_interpreter_call.outputs: Includes the outputs of python code execution
in code interpreter tool call items.
- computer_call_output.output.image_url: Include image urls from the computer call output.
- file_search_call.results: Include the search results of
the file search tool call.
- message.input_image.image_url: Include image urls from the input message.
- message.output_text.logprobs: Include logprobs with assistant messages.
- reasoning.encrypted_content: Includes an encrypted version of reasoning
tokens in reasoning item outputs. This enables reasoning items to be used in
multi-turn conversations when using the Responses API statelessly (like
when the store parameter is set to false, or when an organization is
enrolled in the zero data retention program).
No
inputstring or arrayNo
instructionsstringA system (or developer) message inserted into the model’s context.

When used along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
No
max_output_tokensintegerAn upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.No
max_tool_callsintegerThe maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.No
metadataobjectSet of up to 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
modelstringThe model deployment to use for the creation of this response.Yes
parallel_tool_callsbooleanWhether to allow the model to run tool calls in parallel.NoTrue
previous_response_idstringThe unique ID of the previous response to the model. Use this to
create multi-turn conversations.
No
promptobjectReference to a prompt template and its variables.
No
└─ idstringThe unique identifier of the prompt template to use.No
└─ variablesOpenAI.ResponsePromptVariablesOptional map of values to substitute in for variables in your
prompt. The substitution values can either be strings, or other
Response input types like images or files.
No
└─ versionstringOptional version of the prompt template.No
reasoningobjectreasoning models only

Configuration options for
reasoning models.
No
└─ effortOpenAI.ReasoningEffortreasoning models only

Constrains effort on reasoning for
reasoning models.
Currently supported values are low, medium, and high. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
No
└─ generate_summaryenumDeprecated: use summary instead.

A summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model’s reasoning process.
One of auto, concise, or detailed.
Possible values: auto, concise, detailed
No
└─ summaryenumA summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model’s reasoning process.
One of auto, concise, or detailed.
Possible values: auto, concise, detailed
No
storebooleanWhether to store the generated model response for later retrieval via
API.
NoTrue
streambooleanIf set to true, the model response data will be streamed to the client
as it is generated using server-sent events.
See the Streaming section below
for more information.
NoFalse
temperaturenumberWhat sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
No1
textobjectConfiguration options for a text response from the model. Can be plain
text or structured JSON data. Learn more: Structured Outputs
No
└─ formatOpenAI.ResponseTextFormatConfigurationNo
tool_choiceobjectControls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or
more tools.

required means the model must call one or more tools.
No
└─ typeOpenAI.ToolChoiceObjectTypeIndicates that the model should use a built-in tool to generate a response.No
toolsarrayAn array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

- Built-in tools: Tools that are provided by OpenAI that extend the
model’s capabilities, like file search.
- Function calls (custom tools): Functions that are defined by you,
enabling the model to call your own code.
No
top_logprobsintegerAn integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.No
top_pnumberAn alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or temperature but not both.
No1
truncationenumThe truncation strategy to use for the model response.
- auto: If the context of this response and previous ones exceeds
the model’s context window size, the model will truncate the
response to fit the context window by dropping input items in the
middle of the conversation.
- disabled (default): If a model response will exceed the context window
size for a model, the request will fail with a 400 error.
Possible values: auto, disabled
No
userstringA unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.No

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonAzureResponse
text/event-streamOpenAI.ResponseStreamEvent
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Examples

Example

Create a model response
POST {endpoint}/openai/v1/responses
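A hypothetical minimal request body for this call, assembled from the request-body table above. Only `model` is required; the deployment name and input text are placeholders, and the optional fields shown simply restate their documented defaults:

```python
import json

payload = {
    "model": "my-gpt-deployment",            # required: model deployment name
    "input": "Write a haiku about the sea",  # string or array of input items
    "temperature": 1,                        # default 1
    "store": True,                           # default True: retrievable later
    "truncation": "disabled",                # default: 400 error on overflow
}
body = json.dumps(payload)
```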

Get response

GET {endpoint}/openai/v1/responses/{response_id}
Retrieves a model response with the given ID.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
response_idpathYesstring
include_obfuscationqueryNobooleanWhen true, stream obfuscation will be enabled. Stream obfuscation adds random characters to an obfuscation field on streaming delta events to normalize payload sizes as a mitigation to certain side-channel attacks. These obfuscation fields are included by default, but add a small amount of overhead to the data stream. You can set include_obfuscation to false to optimize for bandwidth if you trust the network links between your application and the OpenAI API.
include[]queryNoarray

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonAzureResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Delete response

DELETE {endpoint}/openai/v1/responses/{response_id}
Deletes a response by ID.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
response_idpathYesstring

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonobject
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

List input items

GET {endpoint}/openai/v1/responses/{response_id}/input_items
Returns a list of input items for a given response.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
response_idpathYesstring
limitqueryNointegerA limit on the number of objects to be returned. Limit can range between 1 and 100, and the
default is 20.
orderqueryNostring
Possible values: asc, desc
Sort order by the created_at timestamp of the objects. asc for ascending order and desc
for descending order.
afterqueryNostringA cursor for use in pagination. after is an object ID that defines your place in the list.
For instance, if you make a list request and receive 100 objects, ending with obj_foo, your
subsequent call can include after=obj_foo in order to fetch the next page of the list.
beforequeryNostringA cursor for use in pagination. before is an object ID that defines your place in the list.
For instance, if you make a list request and receive 100 objects, ending with obj_foo, your
subsequent call can include before=obj_foo in order to fetch the previous page of the list.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.ResponseItemList
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

List vector stores

GET {endpoint}/openai/v1/vector_stores
Returns a list of vector stores.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
limitqueryNointegerA limit on the number of objects to be returned. Limit can range between 1 and 100, and the
default is 20.
orderqueryNostring
Possible values: asc, desc
Sort order by the created_at timestamp of the objects. asc for ascending order and desc
for descending order.
afterqueryNostringA cursor for use in pagination. after is an object ID that defines your place in the list.
For instance, if you make a list request and receive 100 objects, ending with obj_foo, your
subsequent call can include after=obj_foo in order to fetch the next page of the list.
beforequeryNostringA cursor for use in pagination. before is an object ID that defines your place in the list.
For instance, if you make a list request and receive 100 objects, ending with obj_foo, your
subsequent call can include before=obj_foo in order to fetch the previous page of the list.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.ListVectorStoresResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Create vector store

POST {endpoint}/openai/v1/vector_stores
Creates a vector store.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
chunking_strategyobjectThe chunking strategy used to chunk the file(s). If not set, the auto strategy is used, which currently uses a max_chunk_size_tokens of 800 and chunk_overlap_tokens of 400.No
└─ staticOpenAI.StaticChunkingStrategyNo
└─ typeenumAlways static.
Possible values: static
No
expires_afterOpenAI.VectorStoreExpirationAfterThe expiration policy for a vector store.No
file_idsarrayA list of File IDs that the vector store should use. Useful for tools like file_search that can access files.No
metadataobjectSet of up to 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
namestringThe name of the vector store.No

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.VectorStoreObject
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse
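A hypothetical request body for creating a vector store, built from the request-body table above. Every field is optional, and the name, file IDs, and metadata values are placeholders:

```python
import json

payload = {
    "name": "support-docs",
    "file_ids": ["assistant-file-1", "assistant-file-2"],
    "chunking_strategy": {  # static strategy with the documented defaults
        "type": "static",
        "static": {"max_chunk_size_tokens": 800,
                   "chunk_overlap_tokens": 400},
    },
    "metadata": {"team": "docs"},  # keys <= 64 chars, values <= 512 chars
}
body = json.dumps(payload)
```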


Get vector store

GET {endpoint}/openai/v1/vector_stores/{vector_store_id}
Retrieves a vector store.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
vector_store_idpathYesstringThe ID of the vector store to retrieve.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.VectorStoreObject
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Modify vector store

POST {endpoint}/openai/v1/vector_stores/{vector_store_id}
Modifies a vector store.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
vector_store_idpathYesstringThe ID of the vector store to modify.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
expires_afterobjectThe expiration policy for a vector store.No
└─ anchorenumAnchor timestamp after which the expiration policy applies. Supported anchors: last_active_at.
Possible values: last_active_at
No
└─ daysintegerThe number of days after the anchor time that the vector store will expire.No
metadataobjectSet of up to 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
namestringThe name of the vector store.No

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.VectorStoreObject
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Delete vector store

DELETE {endpoint}/openai/v1/vector_stores/{vector_store_id}
Delete a vector store.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
vector_store_idpathYesstringThe ID of the vector store to delete.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.DeleteVectorStoreResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Create vector store file batch

POST {endpoint}/openai/v1/vector_stores/{vector_store_id}/file_batches
Create a vector store file batch.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
vector_store_idpathYesstringThe ID of the vector store for which to create a file batch.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
attributesobjectSet of up to 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard. Keys are strings
with a maximum length of 64 characters. Values are strings with a maximum
length of 512 characters, booleans, or numbers.
No
chunking_strategyOpenAI.ChunkingStrategyRequestParamThe chunking strategy used to chunk the file(s). If not set, will use the auto strategy.No
file_idsarrayA list of File IDs that the vector store should use. Useful for tools like file_search that can access files.Yes

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.VectorStoreFileBatchObject
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse
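A hypothetical request body for this call; `file_ids` is the only required field per the table above, and the IDs shown are placeholders:

```python
import json

payload = {
    "file_ids": ["assistant-file-1", "assistant-file-2"],
    # chunking_strategy omitted: the auto strategy is then used
}
body = json.dumps(payload)
```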

Get vector store file batch

GET {endpoint}/openai/v1/vector_stores/{vector_store_id}/file_batches/{batch_id}
Retrieves a vector store file batch.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
vector_store_idpathYesstringThe ID of the vector store that the file batch belongs to.
batch_idpathYesstringThe ID of the file batch being retrieved.

Request Header

Use either token-based authentication or an API key. Token-based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.VectorStoreFileBatchObject
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse
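
A minimal sketch of composing the retrieval URL, assuming placeholder values for the resource name, vector store ID, and batch ID:

```python
endpoint = "https://my-resource.openai.azure.com"   # placeholder resource
vector_store_id = "vs_abc123"                       # placeholder ID
batch_id = "vsfb_abc123"                            # placeholder ID

# GET this URL; a 200 response returns OpenAI.VectorStoreFileBatchObject.
url = (
    f"{endpoint}/openai/v1/vector_stores/{vector_store_id}"
    f"/file_batches/{batch_id}?api-version=v1"
)

headers = {"api-key": "<Azure_OpenAI_API_Key>"}  # or Authorization: Bearer ...
```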

Cancel vector store file batch

POST {endpoint}/openai/v1/vector_stores/{vector_store_id}/file_batches/{batch_id}/cancel
Cancel a vector store file batch. This attempts to cancel the processing of files in this batch as soon as possible.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
vector_store_idpathYesstringThe ID of the vector store that the file batch belongs to.
batch_idpathYesstringThe ID of the file batch to cancel.

Request Header

Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.VectorStoreFileBatchObject
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse
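
Cancellation is a POST to the /cancel action with no request body. A sketch with placeholder IDs:

```python
endpoint = "https://my-resource.openai.azure.com"   # placeholder resource

# POST this URL (empty body); a 200 response returns the updated
# OpenAI.VectorStoreFileBatchObject.
url = (f"{endpoint}/openai/v1/vector_stores/vs_abc123"
       f"/file_batches/vsfb_abc123/cancel")

headers = {"api-key": "<Azure_OpenAI_API_Key>"}
```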

List files in vector store batch

GET {endpoint}/openai/v1/vector_stores/{vector_store_id}/file_batches/{batch_id}/files
Returns a list of vector store files in a batch.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
vector_store_idpathYesstringThe ID of the vector store that the file batch belongs to.
batch_idpathYesstringThe ID of the file batch that the files belong to.
limitqueryNointegerA limit on the number of objects to be returned. Limit can range between 1 and 100, and the
default is 20.
orderqueryNostring
Possible values: asc, desc
Sort order by the created_at timestamp of the objects. asc for ascending order and desc
for descending order.
afterqueryNostringA cursor for use in pagination. after is an object ID that defines your place in the list.
For instance, if you make a list request and receive 100 objects, ending with obj_foo, your
subsequent call can include after=obj_foo in order to fetch the next page of the list.
beforequeryNostringA cursor for use in pagination. before is an object ID that defines your place in the list.
For instance, if you make a list request and receive 100 objects, ending with obj_foo, your
subsequent call can include before=obj_foo in order to fetch the previous page of the list.
filterqueryNoFilter by file status. One of in_progress, completed, failed, cancelled.

Request Header

Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.ListVectorStoreFilesResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse
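
The query parameters above can be combined into a listing URL as sketched below. The helper function, IDs, and cursor value are illustrative placeholders, not part of the API surface.

```python
from urllib.parse import urlencode

def build_list_url(endpoint, vector_store_id, batch_id, *, limit=20,
                   order="desc", after=None, filter=None):
    """Compose the batch-files listing URL with optional cursor and filter."""
    params = {"limit": limit, "order": order}
    if after:
        params["after"] = after    # object ID marking your place in the list
    if filter:
        params["filter"] = filter  # in_progress | completed | failed | cancelled
    base = (f"{endpoint}/openai/v1/vector_stores/{vector_store_id}"
            f"/file_batches/{batch_id}/files")
    return f"{base}?{urlencode(params)}"

url = build_list_url("https://my-resource.openai.azure.com",
                     "vs_abc123", "vsfb_abc123",
                     limit=50, after="obj_foo", filter="completed")
```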

List vector store files

GET {endpoint}/openai/v1/vector_stores/{vector_store_id}/files
Returns a list of vector store files.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
vector_store_idpathYesstringThe ID of the vector store that the files belong to.
limitqueryNointegerA limit on the number of objects to be returned. Limit can range between 1 and 100, and the
default is 20.
orderqueryNostring
Possible values: asc, desc
Sort order by the created_at timestamp of the objects. asc for ascending order and desc
for descending order.
afterqueryNostringA cursor for use in pagination. after is an object ID that defines your place in the list.
For instance, if you make a list request and receive 100 objects, ending with obj_foo, your
subsequent call can include after=obj_foo in order to fetch the next page of the list.
beforequeryNostringA cursor for use in pagination. before is an object ID that defines your place in the list.
For instance, if you make a list request and receive 100 objects, ending with obj_foo, your
subsequent call can include before=obj_foo in order to fetch the previous page of the list.
filterqueryNoFilter by file status. One of in_progress, completed, failed, cancelled.

Request Header

Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.ListVectorStoreFilesResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse
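
Paging through every file in a store follows the cursor pattern described above. This sketch assumes the OpenAI-style list response shape ({"data": [...], "has_more": bool}); the fetch function is stubbed so the loop shape is visible without a network call.

```python
def list_all_files(fetch_page):
    """Follow the `after` cursor until has_more is False."""
    files, after = [], None
    while True:
        page = fetch_page(after=after)   # GET .../files?after=<cursor>
        files.extend(page["data"])
        if not page.get("has_more"):
            return files
        after = page["data"][-1]["id"]   # cursor = last object ID on the page

# Stub returning two pages of fake file objects, for illustration only.
pages = [
    {"data": [{"id": "vsf_1"}, {"id": "vsf_2"}], "has_more": True},
    {"data": [{"id": "vsf_3"}], "has_more": False},
]
def fake_fetch(after=None):
    return pages[0] if after is None else pages[1]

all_files = list_all_files(fake_fetch)
```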

Create vector store file

POST {endpoint}/openai/v1/vector_stores/{vector_store_id}/files
Create a vector store file by attaching a File to a vector store.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
vector_store_idpathYesstringThe ID of the vector store for which to create a File.

Request Header

Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
attributesobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard. Keys are strings
with a maximum length of 64 characters. Values are strings with a maximum
length of 512 characters, booleans, or numbers.
No
chunking_strategyOpenAI.ChunkingStrategyRequestParamThe chunking strategy used to chunk the file(s). If not set, will use the auto strategy.No
file_idstringA File ID that the vector store should use. Useful for tools like file_search that can access files.Yes

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.VectorStoreFileObject
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse
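
A sketch of the request body for attaching a single file, here with an explicit static chunking strategy instead of the default auto. Values are placeholders; the static parameter names follow the OpenAI chunking-strategy schema.

```python
import json

# file_id is required; attributes and chunking_strategy are optional.
body = {
    "file_id": "file-111",
    "attributes": {"source": "handbook"},
    "chunking_strategy": {
        "type": "static",
        "static": {
            "max_chunk_size_tokens": 800,  # upper bound per chunk
            "chunk_overlap_tokens": 400,   # overlap between adjacent chunks
        },
    },
}

payload = json.dumps(body)
```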

Get vector store file

GET {endpoint}/openai/v1/vector_stores/{vector_store_id}/files/{file_id}
Retrieves a vector store file.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
vector_store_idpathYesstringThe ID of the vector store that the file belongs to.
file_idpathYesstringThe ID of the file being retrieved.

Request Header

Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.VectorStoreFileObject
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse

Update vector store file attributes

POST {endpoint}/openai/v1/vector_stores/{vector_store_id}/files/{file_id}
Updates attributes on a vector store file.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
vector_store_idpathYesstring
file_idpathYesstring

Request Header

Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
attributesobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard. Keys are strings
with a maximum length of 64 characters. Values are strings with a maximum
length of 512 characters, booleans, or numbers.
Yes

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.VectorStoreFileObject
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse
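
Since attributes is the only (and required) body field for this operation, the request reduces to a small JSON document. IDs and attribute keys below are placeholders.

```python
import json

endpoint = "https://my-resource.openai.azure.com"   # placeholder resource
url = f"{endpoint}/openai/v1/vector_stores/vs_abc123/files/vsf_111"

# Values may be strings (max 512 chars), booleans, or numbers; keys are
# strings of at most 64 chars, with at most 16 pairs in total.
body = {"attributes": {"reviewed": True, "priority": 2}}

payload = json.dumps(body)  # POST with Content-Type: application/json
```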

Delete vector store file

DELETE {endpoint}/openai/v1/vector_stores/{vector_store_id}/files/{file_id}
Delete a vector store file. This removes the file from the vector store, but the file itself is not deleted. To delete the file, use the delete file endpoint.

Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Supported Azure OpenAI endpoints (protocol and hostname, for example: https://aoairesource.openai.azure.com. Replace “aoairesource” with your Azure OpenAI resource name). https://{your-resource-name}.openai.azure.com
api-versionqueryNoThe explicit Foundry Models API version to use for this request.
v1 if not otherwise specified.
vector_store_idpathYesstringThe ID of the vector store that the file belongs to.
file_idpathYesstringThe ID of the file to delete.

Request Header

Use either token based authentication or API key. Authenticating with token based authentication is recommended and more secure.
NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_OpenAI_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://cognitiveservices.azure.com

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://cognitiveservices.azure.com/.default
api-keyTruestringProvide Azure OpenAI API key here

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.DeleteVectorStoreFileResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzureErrorResponse
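
Because this operation only detaches the file from the store, a full cleanup takes two separate DELETE calls. The second URL below assumes the v1 files endpoint shape (/openai/v1/files/{file_id}); IDs are placeholders.

```python
endpoint = "https://my-resource.openai.azure.com"   # placeholder resource
vector_store_id, file_id = "vs_abc123", "file-111"

# DELETE 1: removes the association between the file and the vector store.
detach_url = (f"{endpoint}/openai/v1/vector_stores/{vector_store_id}"
              f"/files/{file_id}")

# DELETE 2: deletes the underlying file itself (assumed files endpoint).
delete_url = f"{endpoint}/openai/v1/files/{file_id}"
```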

Components

AzureAIFoundryModelsApiVersion

PropertyValue
Typestring
Valuesv1
preview

AzureChatCompletionResponseMessage

The extended response model component for chat completion response messages on the Azure OpenAI service. This model adds support for chat message context, used by the On Your Data feature for intent, citations, and other information related to the retrieval-augmented generation performed.
NameTypeDescriptionRequiredDefault
annotationsarrayAnnotations for the message, when applicable, such as when using the web search tool.No
audioobjectIf the audio output modality is requested, this object contains data
about the audio response from the model.
No
└─ datastringBase64 encoded audio bytes generated by the model, in the format
specified in the request.
No
└─ expires_atintegerThe Unix timestamp (in seconds) for when this audio response will
no longer be accessible on the server for use in multi-turn
conversations.
No
└─ idstringUnique identifier for this audio response.No
└─ transcriptstringTranscript of the audio generated by the model.No
contentstringThe contents of the message.Yes
contextobjectAn additional property, added to chat completion response messages, produced by the Azure OpenAI service when using
extension behavior. This includes intent and citation information from the On Your Data feature.
No
└─ all_retrieved_documentsobjectSummary information about documents retrieved by the data retrieval operation.No
└─ chunk_idstringThe chunk ID for the citation.No
└─ contentstringThe content of the citation.No
└─ data_source_indexintegerThe index of the data source used for retrieval.No
└─ filepathstringThe file path for the citation.No
└─ filter_reasonenumIf applicable, an indication of why the document was filtered.
Possible values: score, rerank
No
└─ original_search_scorenumberThe original search score for the retrieval.No
└─ rerank_scorenumberThe rerank score for the retrieval.No
└─ search_queriesarrayThe search queries executed to retrieve documents.No
└─ titlestringThe title for the citation.No
└─ urlstringThe URL of the citation.No
└─ citationsarrayThe citations produced by the data retrieval.No
└─ intentstringThe detected intent from the chat history, which is used to carry conversation context between interactions.No
function_callobjectDeprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model.No
└─ argumentsstringNo
└─ namestringNo
reasoning_contentstringAn Azure-specific extension property containing generated reasoning content from supported models.No
refusalstringThe refusal message generated by the model.Yes
roleenumThe role of the author of this message.
Possible values: assistant
Yes
tool_callsChatCompletionMessageToolCallsItemThe tool calls generated by the model, such as function calls.No

AzureChatCompletionStreamResponseDelta

The extended response model for a streaming chat response message on the Azure OpenAI service. This model adds support for chat message context, used by the On Your Data feature for intent, citations, and other information related to the retrieval-augmented generation performed.
NameTypeDescriptionRequiredDefault
audioobjectNo
└─ datastringNo
└─ expires_atintegerNo
└─ idstringNo
└─ transcriptstringNo
contentstringThe contents of the chunk message.No
contextobjectAn additional property, added to chat completion response messages, produced by the Azure OpenAI service when using
extension behavior. This includes intent and citation information from the On Your Data feature.
No
└─ all_retrieved_documentsobjectSummary information about documents retrieved by the data retrieval operation.No
└─ chunk_idstringThe chunk ID for the citation.No
└─ contentstringThe content of the citation.No
└─ data_source_indexintegerThe index of the data source used for retrieval.No
└─ filepathstringThe file path for the citation.No
└─ filter_reasonenumIf applicable, an indication of why the document was filtered.
Possible values: score, rerank
No
└─ original_search_scorenumberThe original search score for the retrieval.No
└─ rerank_scorenumberThe rerank score for the retrieval.No
└─ search_queriesarrayThe search queries executed to retrieve documents.No
└─ titlestringThe title for the citation.No
└─ urlstringThe URL of the citation.No
└─ citationsarrayThe citations produced by the data retrieval.No
└─ intentstringThe detected intent from the chat history, which is used to carry conversation context between interactions.No
function_callobjectDeprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model.No
└─ argumentsstringNo
└─ namestringNo
reasoning_contentstringAn Azure-specific extension property containing generated reasoning content from supported models.No
refusalstringThe refusal message generated by the model.No
roleobjectThe role of the author of a message.No
tool_callsarrayNo

AzureChatDataSource

A representation of configuration data for a single Azure OpenAI chat data source. This will be used by a chat completions request that should use Azure OpenAI chat extensions to augment the response behavior. The use of this configuration is compatible only with Azure OpenAI.

Discriminator for AzureChatDataSource

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeobjectYes
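
To illustrate the discriminator, a sketch of one data_sources entry using the azure_search type. The parameter names shown (endpoint, index_name, authentication) follow the On Your Data azure_search schema; all values are placeholders.

```python
import json

data_source = {
    "type": "azure_search",  # discriminator selecting the data source type
    "parameters": {
        "endpoint": "https://my-search.search.windows.net",
        "index_name": "my-index",
        "authentication": {
            "type": "system_assigned_managed_identity",
        },
    },
}

# Fragment of a chat completions request body using this data source.
request_fragment = {"data_sources": [data_source]}
payload = json.dumps(request_fragment)
```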

AzureChatDataSourceAccessTokenAuthenticationOptions

NameTypeDescriptionRequiredDefault
access_tokenstringYes
typeenum
Possible values: access_token
Yes

AzureChatDataSourceApiKeyAuthenticationOptions

NameTypeDescriptionRequiredDefault
keystringYes
typeenum
Possible values: api_key
Yes

AzureChatDataSourceAuthenticationOptions

Discriminator for AzureChatDataSourceAuthenticationOptions

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeAzureChatDataSourceAuthenticationOptionsTypeYes

AzureChatDataSourceAuthenticationOptionsType

PropertyValue
Typestring
Valuesapi_key
username_and_password
connection_string
key_and_key_id
encoded_api_key
access_token
system_assigned_managed_identity
user_assigned_managed_identity

AzureChatDataSourceConnectionStringAuthenticationOptions

NameTypeDescriptionRequiredDefault
connection_stringstringYes
typeenum
Possible values: connection_string
Yes

AzureChatDataSourceDeploymentNameVectorizationSource

Represents a vectorization source that makes internal service calls against an Azure OpenAI embedding model deployment. In contrast with the endpoint-based vectorization source, a deployment-name-based vectorization source must be part of the same Azure OpenAI resource but can be used even in private networks.
NameTypeDescriptionRequiredDefault
deployment_namestringThe embedding model deployment to use for vectorization. This deployment must exist within the same Azure OpenAI
resource as the model deployment being used for chat completions.
Yes
dimensionsintegerThe number of dimensions to request on embeddings.
Only supported in ‘text-embedding-3’ and later models.
No
typeenumThe type identifier, always ‘deployment_name’ for this vectorization source type.
Possible values: deployment_name
Yes

AzureChatDataSourceEncodedApiKeyAuthenticationOptions

NameTypeDescriptionRequiredDefault
encoded_api_keystringYes
typeenum
Possible values: encoded_api_key
Yes

AzureChatDataSourceEndpointVectorizationSource

Represents a vectorization source that makes public service calls against an Azure OpenAI embedding model deployment.
NameTypeDescriptionRequiredDefault
authenticationobjectYes
└─ access_tokenstringNo
└─ keystringNo
└─ typeenum
Possible values: access_token
No
dimensionsintegerThe number of dimensions to request on embeddings.
Only supported in ‘text-embedding-3’ and later models.
No
endpointstringSpecifies the resource endpoint URL from which embeddings should be retrieved.
It should be in the format of:
https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings.
The api-version query parameter is not allowed.
Yes
typeenumThe type identifier, always ‘endpoint’ for this vectorization source type.
Possible values: endpoint
Yes

AzureChatDataSourceIntegratedVectorizationSource

Represents an integrated vectorization source as defined within the supporting search resource.
NameTypeDescriptionRequiredDefault
typeenumThe type identifier, always ‘integrated’ for this vectorization source type.
Possible values: integrated
Yes

AzureChatDataSourceKeyAndKeyIdAuthenticationOptions

NameTypeDescriptionRequiredDefault
keystringYes
key_idstringYes
typeenum
Possible values: key_and_key_id
Yes

AzureChatDataSourceModelIdVectorizationSource

Represents a vectorization source that makes service calls based on a search service model ID. This source type is currently only supported by Elasticsearch.
NameTypeDescriptionRequiredDefault
model_idstringThe embedding model build ID to use for vectorization.Yes
typeenumThe type identifier, always ‘model_id’ for this vectorization source type.
Possible values: model_id
Yes

AzureChatDataSourceSystemAssignedManagedIdentityAuthenticationOptions

NameTypeDescriptionRequiredDefault
typeenum
Possible values: system_assigned_managed_identity
Yes

AzureChatDataSourceType

PropertyValue
Typestring
Valuesazure_search
azure_cosmos_db
elasticsearch
pinecone
mongo_db

AzureChatDataSourceUserAssignedManagedIdentityAuthenticationOptions

NameTypeDescriptionRequiredDefault
managed_identity_resource_idstringYes
typeenum
Possible values: user_assigned_managed_identity
Yes

AzureChatDataSourceUsernameAndPasswordAuthenticationOptions

NameTypeDescriptionRequiredDefault
passwordstringYes
typeenum
Possible values: username_and_password
Yes
usernamestringYes

AzureChatDataSourceVectorizationSource

A representation of a data vectorization source usable as an embedding resource with a data source.

Discriminator for AzureChatDataSourceVectorizationSource

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeobjectYes

AzureChatDataSourceVectorizationSourceType

PropertyValue
Typestring
Valuesendpoint
deployment_name
model_id
integrated

AzureChatMessageContext

An additional property, added to chat completion response messages, produced by the Azure OpenAI service when using extension behavior. This includes intent and citation information from the On Your Data feature.
NameTypeDescriptionRequiredDefault
all_retrieved_documentsobjectSummary information about documents retrieved by the data retrieval operation.No
└─ chunk_idstringThe chunk ID for the citation.No
└─ contentstringThe content of the citation.No
└─ data_source_indexintegerThe index of the data source used for retrieval.No
└─ filepathstringThe file path for the citation.No
└─ filter_reasonenumIf applicable, an indication of why the document was filtered.
Possible values: score, rerank
No
└─ original_search_scorenumberThe original search score for the retrieval.No
└─ rerank_scorenumberThe rerank score for the retrieval.No
└─ search_queriesarrayThe search queries executed to retrieve documents.No
└─ titlestringThe title for the citation.No
└─ urlstringThe URL of the citation.No
citationsarrayThe citations produced by the data retrieval.No
intentstringThe detected intent from the chat history, which is used to carry conversation context between interactions.No

AzureContentFilterBlocklistResult

A collection of true/false filtering results for configured custom blocklists.
NameTypeDescriptionRequiredDefault
detailsarrayThe pairs of individual blocklist IDs and whether they resulted in a filtering action.No
filteredbooleanA value indicating whether any of the detailed blocklists resulted in a filtering action.Yes

AzureContentFilterCompletionTextSpan

A representation of a span of completion text as used by Azure OpenAI content filter results.
NameTypeDescriptionRequiredDefault
completion_end_offsetintegerOffset of the first UTF32 code point which is excluded from the span. This field is always equal to completion_start_offset for empty spans. This field is always larger than completion_start_offset for non-empty spans.Yes
completion_start_offsetintegerOffset of the UTF32 code point which begins the span.Yes
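
The offsets above follow half-open interval semantics: a span covers code points [start, end), so end is the first code point excluded and an empty span has end equal to start. A small worked example (the completion text is invented):

```python
completion = "The quick brown fox"
span = {"completion_start_offset": 4, "completion_end_offset": 9}

# Python strings index by code point, so slicing matches the offsets directly.
covered = completion[span["completion_start_offset"]:
                     span["completion_end_offset"]]
```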

AzureContentFilterCompletionTextSpanDetectionResult

NameTypeDescriptionRequiredDefault
detailsarrayDetailed information about the detected completion text spans.Yes
detectedbooleanWhether the labeled content category was detected in the content.Yes
filteredbooleanWhether the content detection resulted in a content filtering action.Yes

AzureContentFilterCustomTopicResult

A collection of true/false filtering results for configured custom topics.
NameTypeDescriptionRequiredDefault
detailsarrayThe pairs of individual topic IDs and whether they are detected.No
filteredbooleanA value indicating whether any of the detailed topics resulted in a filtering action.Yes

AzureContentFilterDetectionResult

A labeled content filter result item that indicates whether the content was detected and whether the content was filtered.
NameTypeDescriptionRequiredDefault
detectedbooleanWhether the labeled content category was detected in the content.Yes
filteredbooleanWhether the content detection resulted in a content filtering action.Yes

AzureContentFilterPersonallyIdentifiableInformationResult

A content filter detection result for Personally Identifiable Information that includes harm extensions.
NameTypeDescriptionRequiredDefault
redacted_textstringThe redacted text with PII information removed or masked.No
sub_categoriesarrayDetailed results for individual PIIHarmSubCategory(s).No

AzureContentFilterResultForChoice

A content filter result for a single response item produced by a generative AI system.
NameTypeDescriptionRequiredDefault
custom_blocklistsobjectA collection of true/false filtering results for configured custom blocklists.No
└─ detailsarrayThe pairs of individual blocklist IDs and whether they resulted in a filtering action.No
└─ filteredbooleanA value indicating whether any of the detailed blocklists resulted in a filtering action.No
custom_topicsobjectA collection of true/false filtering results for configured custom topics.No
└─ detailsarrayThe pairs of individual topic IDs and whether they are detected.No
└─ filteredbooleanA value indicating whether any of the detailed topics resulted in a filtering action.No
errorobjectIf present, details about an error that prevented content filtering from completing its evaluation.No
└─ codeintegerA distinct, machine-readable code associated with the error.No
└─ messagestringA human-readable message associated with the error.No
hateobjectA labeled content filter result item that indicates whether the content was filtered and what the qualitative
severity level of the content was, as evaluated against content filter configuration for the category.
No
└─ filteredbooleanWhether the content severity resulted in a content filtering action.No
└─ severityenumThe labeled severity of the content.
Possible values: safe, low, medium, high
No
personally_identifiable_informationobjectA content filter detection result for Personally Identifiable Information that includes harm extensions.No
└─ redacted_textstringThe redacted text with PII information removed or masked.No
└─ sub_categoriesarrayDetailed results for individual PIIHarmSubCategory(s).No
profanityobjectA labeled content filter result item that indicates whether the content was detected and whether the content was
filtered.
No
└─ detectedbooleanWhether the labeled content category was detected in the content.No
└─ filteredbooleanWhether the content detection resulted in a content filtering action.No
protected_material_codeobjectA detection result that describes a match against licensed code or other protected source material.No
└─ citationobjectIf available, the citation details describing the associated license and its location.No
└─ URLstringThe URL associated with the license.No
└─ licensestringThe name or identifier of the license associated with the detection.No
└─ detectedbooleanWhether the labeled content category was detected in the content.No
└─ filteredbooleanWhether the content detection resulted in a content filtering action.No
protected_material_textobjectA labeled content filter result item that indicates whether the content was detected and whether the content was
filtered.
No
└─ detectedbooleanWhether the labeled content category was detected in the content.No
└─ filteredbooleanWhether the content detection resulted in a content filtering action.No
self_harmobjectA labeled content filter result item that indicates whether the content was filtered and what the qualitative
severity level of the content was, as evaluated against content filter configuration for the category.
No
└─ filteredbooleanWhether the content severity resulted in a content filtering action.No
└─ severityenumThe labeled severity of the content.
Possible values: safe, low, medium, high
No
sexualobjectA labeled content filter result item that indicates whether the content was filtered and what the qualitative
severity level of the content was, as evaluated against content filter configuration for the category.
No
└─ filteredbooleanWhether the content severity resulted in a content filtering action.No
└─ severityenumThe labeled severity of the content.
Possible values: safe, low, medium, high
No
ungrounded_materialAzureContentFilterCompletionTextSpanDetectionResultNo
violenceobjectA labeled content filter result item that indicates whether the content was filtered and what the qualitative
severity level of the content was, as evaluated against content filter configuration for the category.
No
└─ filteredbooleanWhether the content severity resulted in a content filtering action.No
└─ severityenumThe labeled severity of the content.
Possible values: safe, low, medium, high
No

AzureContentFilterResultForPrompt

A content filter result associated with a single input prompt item into a generative AI system.
NameTypeDescriptionRequiredDefault
content_filter_resultsobjectThe content filter category details for the result.No
└─ custom_blocklistsobjectA collection of true/false filtering results for configured custom blocklists.No
└─ detailsarrayThe pairs of individual blocklist IDs and whether they resulted in a filtering action.No
└─ filteredbooleanA value indicating whether any of the detailed blocklists resulted in a filtering action.No
└─ custom_topicsobjectA collection of true/false filtering results for configured custom topics.No
└─ detailsarrayThe pairs of individual topic IDs and whether they are detected.No
└─ filteredbooleanA value indicating whether any of the detailed topics resulted in a filtering action.No
└─ errorobjectIf present, details about an error that prevented content filtering from completing its evaluation.No
└─ codeintegerA distinct, machine-readable code associated with the error.No
└─ messagestringA human-readable message associated with the error.No
└─ hateobjectA labeled content filter result item that indicates whether the content was filtered and what the qualitative
severity level of the content was, as evaluated against content filter configuration for the category.
No
└─ filteredbooleanWhether the content severity resulted in a content filtering action.No
└─ severityenumThe labeled severity of the content.
Possible values: safe, low, medium, high
No
└─ indirect_attackobjectA labeled content filter result item that indicates whether the content was detected and whether the content was
filtered.
No
└─ detectedbooleanWhether the labeled content category was detected in the content.No
└─ filteredbooleanWhether the content detection resulted in a content filtering action.No
└─ jailbreakobjectA labeled content filter result item that indicates whether the content was detected and whether the content was
filtered.
No
└─ detectedbooleanWhether the labeled content category was detected in the content.No
└─ filteredbooleanWhether the content detection resulted in a content filtering action.No
└─ profanityobjectA labeled content filter result item that indicates whether the content was detected and whether the content was
filtered.
No
└─ detectedbooleanWhether the labeled content category was detected in the content.No
└─ filteredbooleanWhether the content detection resulted in a content filtering action.No
└─ self_harmobjectA labeled content filter result item that indicates whether the content was filtered and what the qualitative
severity level of the content was, as evaluated against content filter configuration for the category.
No
└─ filteredbooleanWhether the content severity resulted in a content filtering action.No
└─ severityenumThe labeled severity of the content.
Possible values: safe, low, medium, high
No
└─ sexualobjectA labeled content filter result item that indicates whether the content was filtered and what the qualitative
severity level of the content was, as evaluated against content filter configuration for the category.
No
└─ filteredbooleanWhether the content severity resulted in a content filtering action.No
└─ severityenumThe labeled severity of the content.
Possible values: safe, low, medium, high
No
└─ violenceobjectA labeled content filter result item that indicates whether the content was filtered and what the qualitative
severity level of the content was, as evaluated against content filter configuration for the category.
No
└─ filteredbooleanWhether the content severity resulted in a content filtering action.No
└─ severityenumThe labeled severity of the content.
Possible values: safe, low, medium, high
No
prompt_indexintegerThe index of the input prompt associated with the accompanying content filter result categories.No

AzureContentFilterSeverityResult

A labeled content filter result item that indicates whether the content was filtered and what the qualitative severity level of the content was, as evaluated against content filter configuration for the category.
NameTypeDescriptionRequiredDefault
filteredbooleanWhether the content severity resulted in a content filtering action.Yes
severityenumThe labeled severity of the content.
Possible values: safe, low, medium, high
Yes
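The severity result above can be interpreted programmatically. The sketch below is a minimal, illustrative policy for acting on an `AzureContentFilterSeverityResult` item; the threshold logic is an assumption for demonstration, not part of the API contract.

```python
# Illustrative only: interpreting an AzureContentFilterSeverityResult dict.
# The threshold policy here is an assumption, not defined by the service.

SEVERITY_ORDER = {"safe": 0, "low": 1, "medium": 2, "high": 3}

def exceeds_threshold(severity_result: dict, threshold: str = "medium") -> bool:
    """Return True if the service already filtered the content, or if the
    labeled severity meets or exceeds the caller's threshold."""
    if severity_result.get("filtered"):
        return True
    severity = severity_result.get("severity", "safe")
    return SEVERITY_ORDER[severity] >= SEVERITY_ORDER[threshold]
```

For example, a result of `{"filtered": false, "severity": "high"}` would exceed a `"medium"` threshold even though the service itself took no filtering action.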

AzureCosmosDBChatDataSource

Represents a data source configuration that will use an Azure Cosmos DB resource.
NameTypeDescriptionRequiredDefault
parametersobjectThe parameter information to control the use of the Azure Cosmos DB data source.Yes
└─ allow_partial_resultbooleanIf set to true, the system will allow partial search results to be used and the request will fail if all
partial queries fail. If not specified or specified as false, the request will fail if any search query fails.
NoFalse
└─ authenticationAzureChatDataSourceConnectionStringAuthenticationOptionsNo
└─ container_namestringNo
└─ database_namestringNo
└─ embedding_dependencyAzureChatDataSourceVectorizationSourceA representation of a data vectorization source usable as an embedding resource with a data source.No
└─ fields_mappingobjectNo
└─ content_fieldsarrayNo
└─ content_fields_separatorstringNo
└─ filepath_fieldstringNo
└─ title_fieldstringNo
└─ url_fieldstringNo
└─ vector_fieldsarrayNo
└─ in_scopebooleanWhether queries should be restricted to use of the indexed data.No
└─ include_contextsarrayThe output context properties to include on the response.
By default, citations and intent will be requested.
No['citations', 'intent']
└─ index_namestringNo
└─ max_search_queriesintegerThe maximum number of rewritten queries that should be sent to the search provider for a single user message.
By default, the system will make an automatic determination.
No
└─ strictnessintegerThe configured strictness of the search relevance filtering.
Higher strictness will increase precision but lower recall of the answer.
No
└─ top_n_documentsintegerThe configured number of documents to feature in the query.No
typeenumThe discriminated type identifier, which is always 'azure_cosmos_db'.
Possible values: azure_cosmos_db
Yes
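A `data_sources` entry for this type can be assembled as follows. This is a sketch under stated assumptions: the database, container, index, and embedding deployment names are placeholders, and the `connection_string` authentication shape follows `AzureChatDataSourceConnectionStringAuthenticationOptions` referenced in the table above.

```python
# Illustrative only: building an azure_cosmos_db data_sources entry for
# On Your Data. All resource names below are placeholders, not real values.

def cosmos_db_data_source(connection_string: str) -> dict:
    return {
        "type": "azure_cosmos_db",
        "parameters": {
            "authentication": {
                "type": "connection_string",
                "connection_string": connection_string,
            },
            "database_name": "example-database",      # placeholder
            "container_name": "example-container",    # placeholder
            "index_name": "example-index",            # placeholder
            "fields_mapping": {
                "content_fields": ["content"],
                "vector_fields": ["contentVector"],
            },
        },
    }
```

The resulting dict would be placed in the `data_sources` array of a chat completion request body.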

AzureCreateChatCompletionRequest

The extended request model for chat completions against the Azure OpenAI service. This adds the ability to provide data sources for the On Your Data feature.
NameTypeDescriptionRequiredDefault
audioobjectParameters for audio output. Required when audio output is requested with
modalities: ["audio"].
No
└─ formatenumSpecifies the output audio format. Must be one of wav, mp3, flac,
opus, or pcm16.
Possible values: wav, aac, mp3, flac, opus, pcm16
No
└─ voiceobjectNo
data_sourcesarrayThe data sources to use for the On Your Data feature, exclusive to Azure OpenAI.No
frequency_penaltynumberNumber between -2.0 and 2.0. Positive values penalize new tokens based on
their existing frequency in the text so far, decreasing the model’s
likelihood to repeat the same line verbatim.
No0
function_callenumSpecifying a particular function via {"name": "my_function"} forces the model to call that function.
Possible values: none, auto
No
functionsarrayDeprecated in favor of tools.

A list of functions the model may generate JSON inputs for.
No
logit_biasobjectModify the likelihood of specified tokens appearing in the completion.

Accepts a JSON object that maps tokens (specified by their token ID in the
tokenizer) to an associated bias value from -100 to 100. Mathematically,
the bias is added to the logits generated by the model prior to sampling.
The exact effect will vary per model, but values between -1 and 1 should
decrease or increase likelihood of selection; values like -100 or 100
should result in a ban or exclusive selection of the relevant token.
NoNone
logprobsbooleanWhether to return log probabilities of the output tokens or not. If true,
returns the log probabilities of each output token returned in the
content of message.
NoFalse
max_completion_tokensintegerAn upper bound for the number of tokens that can be generated for a
completion, including visible output tokens and reasoning tokens.
No
max_tokensintegerThe maximum number of tokens that can be generated in the chat completion.
This value can be used to control costs for text generated via API.

This value is now deprecated in favor of max_completion_tokens, and is
not compatible with o1 series models.
No
messagesarrayA list of messages comprising the conversation so far. Depending on the
model you use, different message types (modalities) are supported,
like text, images, and audio.
Yes
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
modalitiesobjectOutput types that you would like the model to generate.
Most models are capable of generating text, which is the default:

["text"]

The gpt-4o-audio-preview model can also be used to generate audio. To request that this model generate
both text and audio responses, you can use:

["text", "audio"]
No
modelstringThe model deployment identifier to use for the chat completion request.Yes
nintegerHow many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs.No1
parallel_tool_callsobjectWhether to enable parallel function calling during tool use.No
predictionobjectBase representation of predicted output from a model.No
└─ typeOpenAI.ChatOutputPredictionTypeNo
presence_penaltynumberNumber between -2.0 and 2.0. Positive values penalize new tokens based on
whether they appear in the text so far, increasing the model’s likelihood
to talk about new topics.
No0
reasoning_effortobjectreasoning models only

Constrains effort on reasoning for
reasoning models.
Currently supported values are low, medium, and high. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
No
response_formatobjectNo
└─ typeenum
Possible values: text, json_object, json_schema
No
seedintegerThis feature is in Beta.
If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.
No
stopobjectNot supported with latest reasoning models o3 and o4-mini.

Up to 4 sequences where the API will stop generating further tokens. The
returned text will not contain the stop sequence.
No
storebooleanWhether or not to store the output of this chat completion request for
use in model distillation or evals products.
NoFalse
streambooleanIf set to true, the model response data will be streamed to the client
as it is generated using server-sent events.
NoFalse
stream_optionsobjectOptions for streaming response. Only set this when you set stream: true.No
└─ include_usagebooleanIf set, an additional chunk will be streamed before the data: [DONE]
message. The usage field on this chunk shows the token usage statistics
for the entire request, and the choices field will always be an empty
array.

All other chunks will also include a usage field, but with a null
value. NOTE: If the stream is interrupted, you may not receive the
final usage chunk which contains the total token usage for the request.
No
temperaturenumberWhat sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
No1
tool_choiceOpenAI.ChatCompletionToolChoiceOptionControls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or more tools.
required means the model must call one or more tools.
Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.

none is the default when no tools are present. auto is the default if tools are present.
No
toolsarrayA list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.No
top_logprobsintegerAn integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.No
top_pnumberAn alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or temperature but not both.
No1
userstringA unique identifier representing your end-user, which can help to
monitor and detect abuse.
No
user_security_contextAzureUserSecurityContextUser security context contains several parameters that describe the application itself, and the end user that interacts with the application. These fields assist your security operations teams to investigate and mitigate security incidents by providing a comprehensive approach to protecting your AI applications. Learn more about protecting AI applications using Microsoft Defender for Cloud.No
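Per the table above, only `model` and `messages` are required. The sketch below builds a minimal request body; the deployment name is a placeholder, and sending it (with the `Authorization` or `api-key` header described under Request Header) is left as a comment rather than a live call.

```python
import json

# Illustrative only: a minimal AzureCreateChatCompletionRequest body.
# "gpt-4o-mini" is a placeholder deployment identifier.

def build_chat_request(deployment: str, user_text: str) -> dict:
    return {
        "model": deployment,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_text},
        ],
        "temperature": 0.7,
        "max_completion_tokens": 256,
    }

body = build_chat_request("gpt-4o-mini", "Hello!")
# POST the serialized body to {endpoint}/openai/v1/chat/completions
# with Content-Type: application/json and your chosen auth header.
payload = json.dumps(body)
```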

AzureCreateChatCompletionResponse

The extended top-level chat completion response model for the Azure OpenAI service. This model adds Responsible AI content filter annotations for prompt input.
NameTypeDescriptionRequiredDefault
choicesarrayYes
createdintegerThe Unix timestamp (in seconds) of when the chat completion was created.Yes
idstringA unique identifier for the chat completion.Yes
modelstringThe model used for the chat completion.Yes
objectenumThe object type, which is always chat.completion.
Possible values: chat.completion
Yes
prompt_filter_resultsarrayThe Responsible AI content filter annotations associated with prompt inputs into chat completions.No
system_fingerprintstringThis fingerprint represents the backend configuration that the model runs with.

Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.
No
usageOpenAI.CompletionUsageUsage statistics for the completion request.No

AzureCreateChatCompletionStreamResponse

NameTypeDescriptionRequiredDefault
choicesarrayA list of chat completion choices. Can contain more than one element if n is greater than 1. Can also be empty for the
last chunk if you set stream_options: {"include_usage": true}.
Yes
content_filter_resultsAzureContentFilterResultForChoiceA content filter result for a single response item produced by a generative AI system.No
createdintegerThe Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.Yes
deltaAzureChatCompletionStreamResponseDeltaThe extended response model for a streaming chat response message on the Azure OpenAI service.
This model adds support for chat message context, used by the On Your Data feature for intent, citations, and other
information related to retrieval-augmented generation performed.
No
idstringA unique identifier for the chat completion. Each chunk has the same ID.Yes
modelstringThe model used to generate the completion.Yes
objectenumThe object type, which is always chat.completion.chunk.
Possible values: chat.completion.chunk
Yes
system_fingerprintstringThis fingerprint represents the backend configuration that the model runs with.
Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.
No
usageobjectUsage statistics for the completion request.No
└─ completion_tokensintegerNumber of tokens in the generated completion.No0
└─ completion_tokens_detailsobjectBreakdown of tokens used in a completion.No
└─ accepted_prediction_tokensintegerWhen using Predicted Outputs, the number of tokens in the
prediction that appeared in the completion.
No0
└─ audio_tokensintegerAudio input tokens generated by the model.No0
└─ reasoning_tokensintegerTokens generated by the model for reasoning.No0
└─ rejected_prediction_tokensintegerWhen using Predicted Outputs, the number of tokens in the
prediction that did not appear in the completion. However, like
reasoning tokens, these tokens are still counted in the total
completion tokens for purposes of billing, output, and context window
limits.
No0
└─ prompt_tokensintegerNumber of tokens in the prompt.No0
└─ prompt_tokens_detailsobjectBreakdown of tokens used in the prompt.No
└─ audio_tokensintegerAudio input tokens present in the prompt.No0
└─ cached_tokensintegerCached tokens present in the prompt.No0
└─ total_tokensintegerTotal number of tokens used in the request (prompt + completion).No0
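When `stream_options: {"include_usage": true}` is set, the `usage` field is null on every chunk except the final one. The sketch below accumulates delta text and captures that final usage chunk; it assumes chunks have already been parsed from the `data:` server-sent-event lines into dicts, which real client code would still need to do.

```python
# Illustrative only: consuming parsed stream-response chunks. SSE line
# parsing ("data: ..." framing, the [DONE] sentinel) is omitted.

def consume_stream(chunks):
    text_parts, usage = [], None
    for chunk in chunks:
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {})
            if delta.get("content"):
                text_parts.append(delta["content"])
        if chunk.get("usage"):   # non-null only on the final usage chunk
            usage = chunk["usage"]
    return "".join(text_parts), usage
```

Note the caveat in the table above: if the stream is interrupted, the final usage chunk may never arrive, so `usage` can remain `None`.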

AzureCreateEmbeddingRequest

NameTypeDescriptionRequiredDefault
dimensionsintegerThe number of dimensions the resulting output embeddings should have. Only supported in text-embedding-3 and later models.No
encoding_formatenumThe format to return the embeddings in. Can be either float or base64.
Possible values: float, base64
No
inputstring or arrayYes
modelstringThe model to use for the embedding request.Yes
userstringA unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.No
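A minimal embedding request body, per the table above, needs only `model` and `input`; the other fields shown are optional. The deployment name is a placeholder.

```python
# Illustrative only: a minimal AzureCreateEmbeddingRequest body.
# "text-embedding-3-small" is a placeholder deployment identifier.

embedding_body = {
    "model": "text-embedding-3-small",
    "input": ["first passage", "second passage"],
    "dimensions": 256,            # supported only by text-embedding-3 and later
    "encoding_format": "float",   # or "base64"
}
```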

AzureCreateFileRequestMultiPart

NameTypeDescriptionRequiredDefault
expires_afterobjectYes
└─ anchorAzureFileExpiryAnchorNo
└─ secondsintegerNo
filestringYes
purposeenumThe intended purpose of the uploaded file. One of: assistants (used in the Assistants API), batch (used in the Batch API), fine-tune (used for fine-tuning), evals (used for eval data sets).
Possible values: assistants, batch, fine-tune, evals
Yes
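This request is multipart form data rather than JSON. The sketch below builds the non-file form fields; the bracketed `expires_after[...]` field naming is an assumption based on common multipart conventions for nested objects, and the filename and expiry values are placeholders. The file content itself would be sent as a separate part named `file`.

```python
# Illustrative only: form fields for AzureCreateFileRequestMultiPart.
# The expires_after[...] bracket encoding is an assumption; verify against
# the service's multipart examples before relying on it.

def build_file_upload_fields(purpose: str = "fine-tune", seconds: int = 3600) -> dict:
    return {
        "purpose": purpose,                      # assistants | batch | fine-tune | evals
        "expires_after[anchor]": "created_at",   # the only AzureFileExpiryAnchor value
        "expires_after[seconds]": str(seconds),
        # the file bytes go in a separate multipart part named "file"
    }
```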

AzureCreateResponse

NameTypeDescriptionRequiredDefault
backgroundbooleanWhether to run the model response in the background.
Learn more.
NoFalse
includearraySpecify additional output data to include in the model response. Currently
supported values are:
- code_interpreter_call.outputs: Includes the outputs of python code execution
in code interpreter tool call items.
- computer_call_output.output.image_url: Include image urls from the computer call output.
- file_search_call.results: Include the search results of
the file search tool call.
- message.input_image.image_url: Include image urls from the input message.
- message.output_text.logprobs: Include logprobs with assistant messages.
- reasoning.encrypted_content: Includes an encrypted version of reasoning
tokens in reasoning item outputs. This enables reasoning items to be used in
multi-turn conversations when using the Responses API statelessly (like
when the store parameter is set to false, or when an organization is
enrolled in the zero data retention program).
No
inputstring or arrayNo
instructionsstringA system (or developer) message inserted into the model’s context.

When using along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
No
max_output_tokensintegerAn upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.No
max_tool_callsintegerThe maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
modelstringThe model deployment to use for the creation of this response.Yes
parallel_tool_callsbooleanWhether to allow the model to run tool calls in parallel.NoTrue
previous_response_idstringThe unique ID of the previous response to the model. Use this to
create multi-turn conversations.
No
promptobjectReference to a prompt template and its variables.
No
└─ idstringThe unique identifier of the prompt template to use.No
└─ variablesOpenAI.ResponsePromptVariablesOptional map of values to substitute in for variables in your
prompt. The substitution values can either be strings, or other
Response input types like images or files.
No
└─ versionstringOptional version of the prompt template.No
reasoningobjectreasoning models only

Configuration options for
reasoning models.
No
└─ effortOpenAI.ReasoningEffortreasoning models only

Constrains effort on reasoning for
reasoning models.
Currently supported values are low, medium, and high. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
No
└─ generate_summaryenumDeprecated: use summary instead.

A summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model’s reasoning process.
One of auto, concise, or detailed.
Possible values: auto, concise, detailed
No
└─ summaryenumA summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model’s reasoning process.
One of auto, concise, or detailed.
Possible values: auto, concise, detailed
No
storebooleanWhether to store the generated model response for later retrieval via
API.
NoTrue
streambooleanIf set to true, the model response data will be streamed to the client
as it is generated using server-sent events.
See the Streaming section below
for more information.
NoFalse
temperaturenumberWhat sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
No1
textobjectConfiguration options for a text response from the model. Can be plain
text or structured JSON data. Learn more: Structured Outputs
No
└─ formatOpenAI.ResponseTextFormatConfigurationNo
tool_choiceobjectControls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or
more tools.

required means the model must call one or more tools.
No
└─ typeOpenAI.ToolChoiceObjectTypeIndicates that the model should use a built-in tool to generate a response.No
toolsarrayAn array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

- Built-in tools: Tools that are provided by OpenAI that extend the
model’s capabilities, like file search.
- Function calls (custom tools): Functions that are defined by you,
enabling the model to call your own code.
No
top_logprobsintegerAn integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.No
top_pnumberAn alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or temperature but not both.
No1
truncationenumThe truncation strategy to use for the model response.
- auto: If the context of this response and previous ones exceeds
the model’s context window size, the model will truncate the
response to fit the context window by dropping input items in the
middle of the conversation.
- disabled (default): If a model response will exceed the context window
size for a model, the request will fail with a 400 error.
Possible values: auto, disabled
No
userstringA unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.No
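Per the table above, only `model` is required for a Responses API request, though `input` is needed in practice to give the model something to respond to. The sketch below is a minimal body with placeholder values.

```python
# Illustrative only: a minimal AzureCreateResponse body.
# "gpt-4o-mini" and the text values are placeholders.

response_body = {
    "model": "gpt-4o-mini",
    "input": "Summarize the key points of the v1 API lifecycle.",
    "instructions": "Answer concisely.",
    "max_output_tokens": 512,
    "truncation": "auto",   # or "disabled" (the default)
}
# POST as JSON to {endpoint}/openai/v1/responses with your auth header.
```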

AzureErrorResponse

NameTypeDescriptionRequiredDefault
errorobjectThe error details.No
└─ codestringThe distinct, machine-generated identifier for the error.No
└─ inner_errorNo
└─ messagestringA human-readable message associated with the error.No
└─ paramstringIf applicable, the request input parameter associated with the error.No
└─ typeenumThe object type, always 'error'.
Possible values: error
No
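A client can flatten the `AzureErrorResponse` shape above into a log-friendly string. This is an illustrative helper; the sample error in the test is invented, not a real service payload.

```python
# Illustrative only: unpacking an AzureErrorResponse body for logging.

def describe_error(error_response: dict) -> str:
    err = error_response.get("error") or {}
    code = err.get("code", "unknown")
    message = err.get("message", "")
    param = err.get("param")
    suffix = f" (param: {param})" if param else ""
    return f"{code}: {message}{suffix}"
```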

AzureEvalAPICompletionsSamplingParams

NameTypeDescriptionRequiredDefault
parallel_tool_callsbooleanNo
response_formatOpenAI.ResponseTextFormatConfigurationNo
toolsarrayNo

AzureEvalAPIModelSamplingParams

NameTypeDescriptionRequiredDefault
max_tokensintegerThe maximum number of tokens in the generated output.No
reasoning_effortenumControls the level of reasoning effort applied during generation.
Possible values: low, medium, high
No
seedintegerA seed value to initialize the randomness during sampling.No
temperaturenumberA higher temperature increases randomness in the outputs.No
top_pnumberAn alternative to temperature for nucleus sampling; 1.0 includes all tokens.No

AzureEvalAPIResponseSamplingParams

NameTypeDescriptionRequiredDefault
parallel_tool_callsbooleanNo
response_formatOpenAI.ResponseTextFormatConfigurationNo
toolsarrayNo

AzureFileExpiryAnchor

PropertyValue
Typestring
Valuescreated_at

AzureFineTuneReinforcementMethod

NameTypeDescriptionRequiredDefault
graderobjectA StringCheckGrader object that performs a string comparison between input and reference using a specified operation.Yes
└─ calculate_outputstringA formula to calculate the output based on grader results.No
└─ evaluation_metricenumThe evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l.
Possible values: fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l
No
└─ gradersobjectNo
└─ inputarrayThe input text. This may include template strings.No
└─ modelstringThe model to use for the evaluation.No
└─ namestringThe name of the grader.No
└─ operationenumThe string check operation to perform. One of eq, ne, like, or ilike.
Possible values: eq, ne, like, ilike
No
└─ rangearrayThe range of the score. Defaults to [0, 1].No
└─ referencestringThe text being graded against.No
└─ sampling_paramsThe sampling parameters for the model.No
└─ typeenumThe object type, which is always multi.
Possible values: multi
No
hyperparametersOpenAI.FineTuneReinforcementHyperparametersThe hyperparameters used for the reinforcement fine-tuning job.No
response_formatobjectNo
└─ json_schemaobjectJSON Schema for the response formatNo
└─ typeenumType of response format
Possible values: json_schema
No

AzureListFilesResponse

NameTypeDescriptionRequiredDefault
dataarrayYes
first_idstringYes
has_morebooleanYes
last_idstringYes
objectenum
Possible values: list
Yes

AzureOpenAIFile

NameTypeDescriptionRequiredDefault
bytesintegerThe size of the file, in bytes.Yes
created_atintegerThe Unix timestamp (in seconds) for when the file was created.Yes
expires_atintegerThe Unix timestamp (in seconds) for when the file will expire.No
filenamestringThe name of the file.Yes
idstringThe file identifier, which can be referenced in the API endpoints.Yes
objectenumThe object type, which is always file.
Possible values: file
Yes
purposeenumThe intended purpose of the file. Supported values are assistants, assistants_output, batch, batch_output, fine-tune, fine-tune-results, and evals.
Possible values: assistants, assistants_output, batch, batch_output, fine-tune, fine-tune-results, evals
Yes
statusenum
Possible values: uploaded, pending, running, processed, error, deleting, deleted
Yes
status_detailsstringDeprecated. For details on why a fine-tuning training file failed validation, see the error field on fine_tuning.job.No

AzurePiiSubCategoryResult

Result details for individual PIIHarmSubCategory(s).
NameTypeDescriptionRequiredDefault
detectedbooleanWhether the labeled content subcategory was detected in the content.Yes
filteredbooleanWhether the content detection resulted in a content filtering action for this subcategory.Yes
redactedbooleanWhether the content was redacted for this subcategory.Yes
sub_categorystringThe PIIHarmSubCategory that was evaluated.Yes

AzureResponse

NameTypeDescriptionRequiredDefault
backgroundbooleanWhether to run the model response in the background.
Learn more.
NoFalse
created_atintegerUnix timestamp (in seconds) of when this Response was created.Yes
errorobjectAn error object returned when the model fails to generate a Response.Yes
└─ codeOpenAI.ResponseErrorCodeThe error code for the response.No
└─ messagestringA human-readable description of the error.No
idstringUnique identifier for this Response.Yes
incomplete_detailsobjectDetails about why the response is incomplete.Yes
└─ reasonenumThe reason why the response is incomplete.
Possible values: max_output_tokens, content_filter
No
instructionsstring or arrayYes
max_output_tokensintegerAn upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.No
max_tool_callsintegerThe maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
Yes
modelstringThe model used to generate this response.Yes
objectenumThe object type of this resource - always set to response.
Possible values: response
Yes
outputarrayAn array of content items generated by the model.

- The length and order of items in the output array is dependent
on the model’s response.
- Rather than accessing the first item in the output array and
assuming it’s an assistant message with the content generated by
the model, you might consider using the output_text property where
supported in SDKs.
Yes
output_textstringSDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
No
parallel_tool_callsbooleanWhether to allow the model to run tool calls in parallel.YesTrue
previous_response_idstringThe unique ID of the previous response to the model. Use this to
create multi-turn conversations.
No
promptobjectReference to a prompt template and its variables.
No
└─ idstringThe unique identifier of the prompt template to use.No
└─ variablesOpenAI.ResponsePromptVariablesOptional map of values to substitute in for variables in your
prompt. The substitution values can either be strings, or other
Response input types like images or files.
No
└─ versionstringOptional version of the prompt template.No
reasoningobjectreasoning models only

Configuration options for
reasoning models.
No
└─ effortOpenAI.ReasoningEffortreasoning models only

Constrains effort on reasoning for
reasoning models.
Currently supported values are low, medium, and high. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
No
└─ generate_summaryenumDeprecated: use summary instead.

A summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model’s reasoning process.
One of auto, concise, or detailed.
Possible values: auto, concise, detailed
No
└─ summaryenumA summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model’s reasoning process.
One of auto, concise, or detailed.
Possible values: auto, concise, detailed
No
statusenumThe status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Possible values: completed, failed, in_progress, cancelled, queued, incomplete
No
temperaturenumberWhat sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
Yes
textobjectConfiguration options for a text response from the model. Can be plain
text or structured JSON data. Learn more: Structured Outputs
No
└─ formatOpenAI.ResponseTextFormatConfigurationNo
tool_choiceobjectControls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or
more tools.

required means the model must call one or more tools.
No
└─ typeOpenAI.ToolChoiceObjectTypeIndicates that the model should use a built-in tool to generate a response.No
toolsarrayAn array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

- Built-in tools: Tools that are provided by OpenAI that extend the
model’s capabilities, like web search or file search.
No
top_logprobsintegerAn integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.No
top_pnumberAn alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or temperature but not both.
Yes
truncationenumThe truncation strategy to use for the model response.
- auto: If the context of this response and previous ones exceeds
the model’s context window size, the model will truncate the
response to fit the context window by dropping input items in the
middle of the conversation.
- disabled (default): If a model response will exceed the context window
size for a model, the request will fail with a 400 error.
Possible values: auto, disabled
No
usageOpenAI.ResponseUsageRepresents token usage details including input tokens, output tokens,
a breakdown of output tokens, and the total tokens used.
No
userstringA unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.Yes
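A minimal sketch of consuming the `output` array as the table above recommends: rather than assuming `output[0]` is the assistant message, iterate the array and collect `output_text` parts (the sample payload is hypothetical; real responses may interleave reasoning or tool items in any order).

```python
# Hypothetical response payload; item order is not guaranteed, which is
# why the docs warn against reading output[0] directly.
response = {
    "object": "response",
    "status": "completed",
    "output": [
        {"type": "reasoning", "summary": []},
        {
            "type": "message",
            "role": "assistant",
            "content": [{"type": "output_text", "text": "Hello!"}],
        },
    ],
}

def aggregate_output_text(resp: dict) -> str:
    """Collect text the way the SDK-only output_text property would."""
    parts = []
    for item in resp.get("output", []):
        if item.get("type") == "message":
            for part in item.get("content", []):
                if part.get("type") == "output_text":
                    parts.append(part["text"])
    return "".join(parts)

text = aggregate_output_text(response)
```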

AzureSearchChatDataSource

Represents a data source configuration that will use an Azure Search resource.
NameTypeDescriptionRequiredDefault
parametersobjectThe parameter information to control the use of the Azure Search data source.Yes
└─ allow_partial_resultbooleanIf set to true, the system will allow partial search results to be used and the request will fail if all
partial queries fail. If not specified or specified as false, the request will fail if any search query fails.
NoFalse
└─ authenticationobjectNo
└─ access_tokenstringNo
└─ keystringNo
└─ managed_identity_resource_idstringNo
└─ typeenum
Possible values: access_token
No
└─ embedding_dependencyobjectRepresents a vectorization source that makes public service calls against an Azure OpenAI embedding model deployment.No
└─ authenticationAzureChatDataSourceApiKeyAuthenticationOptions or AzureChatDataSourceAccessTokenAuthenticationOptionsThe authentication mechanism to use with the endpoint-based vectorization source.
Endpoint authentication supports API key and access token mechanisms.
No
└─ deployment_namestringThe embedding model deployment to use for vectorization. This deployment must exist within the same Azure OpenAI
resource as the model deployment being used for chat completions.
No
└─ dimensionsintegerThe number of dimensions to request on embeddings.
Only supported in ‘text-embedding-3’ and later models.
No
└─ endpointstringSpecifies the resource endpoint URL from which embeddings should be retrieved.
It should be in the format of:
https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings.
The api-version query parameter is not allowed.
No
└─ typeenumThe type identifier, always ‘integrated’ for this vectorization source type.
Possible values: integrated
No
└─ endpointstringThe absolute endpoint path for the Azure Search resource to use.No
└─ fields_mappingobjectThe field mappings to use with the Azure Search resource.No
└─ content_fieldsarrayThe names of index fields that should be treated as content.No
└─ content_fields_separatorstringThe separator pattern that content fields should use.No
└─ filepath_fieldstringThe name of the index field to use as a filepath.No
└─ image_vector_fieldsarrayThe names of fields that represent image vector data.No
└─ title_fieldstringThe name of the index field to use as a title.No
└─ url_fieldstringThe name of the index field to use as a URL.No
└─ vector_fieldsarrayThe names of fields that represent vector data.No
└─ filterstringA filter to apply to the search.No
└─ in_scopebooleanWhether queries should be restricted to use of the indexed data.No
└─ include_contextsarrayThe output context properties to include on the response.
By default, citations and intent will be requested.
No[‘citations’, ‘intent’]
└─ index_namestringThe name of the index to use, as specified in the Azure Search resource.No
└─ max_search_queriesintegerThe maximum number of rewritten queries that should be sent to the search provider for a single user message.
By default, the system will make an automatic determination.
No
└─ query_typeenumThe query type for the Azure Search resource to use.
Possible values: simple, semantic, vector, vector_simple_hybrid, vector_semantic_hybrid
No
└─ semantic_configurationstringAdditional semantic configuration for the query.No
└─ strictnessintegerThe configured strictness of the search relevance filtering.
Higher strictness will increase precision but lower recall of the answer.
No
└─ top_n_documentsintegerThe configured number of documents to feature in the query.No
typeenumThe discriminated type identifier, which is always ‘azure_search’.
Possible values: azure_search
Yes
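A hedged sketch of a `data_sources` entry using the AzureSearchChatDataSource parameters above. The endpoint, index name, and token are placeholders; the `access_token` authentication form follows the fields listed in the table.

```python
# Hypothetical On Your Data entry for an Azure Search index.
azure_search_source = {
    "type": "azure_search",
    "parameters": {
        "endpoint": "https://example-search.search.windows.net",
        "index_name": "example-index",
        "authentication": {
            "type": "access_token",
            "access_token": "<bearer-token>",  # placeholder
        },
        "query_type": "simple",
        "in_scope": True,
        "top_n_documents": 5,
        "include_contexts": ["citations", "intent"],  # matches the default
    },
}

request_body = {
    "messages": [{"role": "user", "content": "What does the index say?"}],
    "data_sources": [azure_search_source],
}
```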

AzureUserSecurityContext

User security context contains several parameters that describe the application itself, and the end user that interacts with the application. These fields assist your security operations teams to investigate and mitigate security incidents by providing a comprehensive approach to protecting your AI applications. Learn more about protecting AI applications using Microsoft Defender for Cloud.
NameTypeDescriptionRequiredDefault
application_namestringThe name of the application. Sensitive personal information should not be included in this field.No
end_user_idstringThis identifier is the Microsoft Entra ID (formerly Azure Active Directory) user object ID used to authenticate end-users within the generative AI application. Sensitive personal information should not be included in this field.No
end_user_tenant_idstringThe Microsoft 365 tenant ID the end user belongs to. It’s required when the generative AI application is multitenant.No
source_ipstringCaptures the original client’s IP address.No
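A sketch of the security-context fields above as they might be attached to a chat completions request; the `user_security_context` property name on the request body is an assumption, and all identifiers are placeholders.

```python
# Hypothetical AzureUserSecurityContext payload. Per the table above,
# avoid sensitive personal information in application_name/end_user_id.
user_security_context = {
    "application_name": "contoso-support-bot",
    "end_user_id": "00000000-0000-0000-0000-000000000000",   # Entra ID object ID (placeholder)
    "end_user_tenant_id": "11111111-1111-1111-1111-111111111111",  # required if multitenant
    "source_ip": "203.0.113.7",  # original client IP
}
```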

ChatCompletionMessageToolCallsItem

The tool calls generated by the model, such as function calls. Array of: OpenAI.ChatCompletionMessageToolCall

CopiedAccountDetails

NameTypeDescriptionRequiredDefault
destinationResourceIdstringThe ID of the destination resource where the model was copied to.Yes
regionstringThe region where the model was copied to.Yes
statusenumThe status of the copy operation.
Possible values: Completed, Failed, InProgress
Yes

CopyModelRequest

NameTypeDescriptionRequiredDefault
destinationResourceIdstringThe ID of the destination resource to copy the model to.Yes
regionstringThe region to copy the model to.Yes

CopyModelResponse

NameTypeDescriptionRequiredDefault
checkpointedModelNamestringThe ID of the copied model.Yes
copiedAccountDetailsarrayDetails of the destination resources where the model was copied.Yes
fineTuningJobIdstringThe ID of the fine-tuning job that the checkpoint was copied from.Yes

ElasticsearchChatDataSource

NameTypeDescriptionRequiredDefault
parametersobjectThe parameter information to control the use of the Elasticsearch data source.Yes
└─ allow_partial_resultbooleanIf set to true, the system will allow partial search results to be used and the request will fail if all
partial queries fail. If not specified or specified as false, the request will fail if any search query fails.
NoFalse
└─ authenticationobjectNo
└─ encoded_api_keystringNo
└─ keystringNo
└─ key_idstringNo
└─ typeenum
Possible values: encoded_api_key
No
└─ embedding_dependencyAzureChatDataSourceVectorizationSourceA representation of a data vectorization source usable as an embedding resource with a data source.No
└─ endpointstringNo
└─ fields_mappingobjectNo
└─ content_fieldsarrayNo
└─ content_fields_separatorstringNo
└─ filepath_fieldstringNo
└─ title_fieldstringNo
└─ url_fieldstringNo
└─ vector_fieldsarrayNo
└─ in_scopebooleanWhether queries should be restricted to use of the indexed data.No
└─ include_contextsarrayThe output context properties to include on the response.
By default, citations and intent will be requested.
No[‘citations’, ‘intent’]
└─ index_namestringNo
└─ max_search_queriesintegerThe maximum number of rewritten queries that should be sent to the search provider for a single user message.
By default, the system will make an automatic determination.
No
└─ query_typeenum
Possible values: simple, vector
No
└─ strictnessintegerThe configured strictness of the search relevance filtering.
Higher strictness will increase precision but lower recall of the answer.
No
└─ top_n_documentsintegerThe configured number of documents to feature in the query.No
typeenumThe discriminated type identifier, which is always ‘elasticsearch’.
Possible values: elasticsearch
Yes

MongoDBChatDataSource

NameTypeDescriptionRequiredDefault
parametersobjectThe parameter information to control the use of the MongoDB data source.Yes
└─ allow_partial_resultbooleanIf set to true, the system will allow partial search results to be used and the request will fail if all
partial queries fail. If not specified or specified as false, the request will fail if any search query fails.
NoFalse
└─ app_namestringThe name of the MongoDB application.No
└─ authenticationobjectNo
└─ passwordstringNo
└─ typeenum
Possible values: username_and_password
No
└─ usernamestringNo
└─ collection_namestringThe name of the MongoDB collection.No
└─ database_namestringThe name of the MongoDB database.No
└─ embedding_dependencyobjectRepresents a vectorization source that makes public service calls against an Azure OpenAI embedding model deployment.No
└─ authenticationAzureChatDataSourceApiKeyAuthenticationOptions or AzureChatDataSourceAccessTokenAuthenticationOptionsThe authentication mechanism to use with the endpoint-based vectorization source.
Endpoint authentication supports API key and access token mechanisms.
No
└─ deployment_namestringThe embedding model deployment to use for vectorization. This deployment must exist within the same Azure OpenAI
resource as the model deployment being used for chat completions.
No
└─ dimensionsintegerThe number of dimensions to request on embeddings.
Only supported in ‘text-embedding-3’ and later models.
No
└─ endpointstringSpecifies the resource endpoint URL from which embeddings should be retrieved.
It should be in the format of:
https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings.
The api-version query parameter is not allowed.
No
└─ typeenumThe type identifier, always ‘deployment_name’ for this vectorization source type.
Possible values: deployment_name
No
└─ endpointstringThe name of the MongoDB cluster endpoint.No
└─ fields_mappingobjectField mappings to apply to data used by the MongoDB data source.
Note that content and vector field mappings are required for MongoDB.
No
└─ content_fieldsarrayNo
└─ content_fields_separatorstringNo
└─ filepath_fieldstringNo
└─ title_fieldstringNo
└─ url_fieldstringNo
└─ vector_fieldsarrayNo
└─ in_scopebooleanWhether queries should be restricted to use of the indexed data.No
└─ include_contextsarrayThe output context properties to include on the response.
By default, citations and intent will be requested.
No[‘citations’, ‘intent’]
└─ index_namestringThe name of the MongoDB index.No
└─ max_search_queriesintegerThe maximum number of rewritten queries that should be sent to the search provider for a single user message.
By default, the system will make an automatic determination.
No
└─ strictnessintegerThe configured strictness of the search relevance filtering.
Higher strictness will increase precision but lower recall of the answer.
No
└─ top_n_documentsintegerThe configured number of documents to feature in the query.No
typeenumThe discriminated type identifier, which is always ‘mongo_db’.
Possible values: mongo_db
Yes
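A hedged sketch of a MongoDBChatDataSource entry, illustrating the note above that content and vector field mappings are required for MongoDB, plus the `deployment_name` vectorization dependency. All names and credentials are placeholders.

```python
# Hypothetical data_sources entry for a MongoDB-backed index.
mongo_db_source = {
    "type": "mongo_db",
    "parameters": {
        "endpoint": "example-cluster.mongocluster.cosmos.azure.com",  # placeholder
        "database_name": "exampledb",
        "collection_name": "docs",
        "index_name": "vector-index",
        "app_name": "example-app",
        "authentication": {
            "type": "username_and_password",
            "username": "<username>",
            "password": "<password>",
        },
        "embedding_dependency": {
            "type": "deployment_name",
            "deployment_name": "text-embedding-3-small",  # assumed deployment
        },
        # Content and vector field mappings are required for MongoDB.
        "fields_mapping": {
            "content_fields": ["content"],
            "vector_fields": ["contentVector"],
        },
    },
}
```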

OpenAI.Annotation

Discriminator for OpenAI.Annotation

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.AnnotationTypeYes

OpenAI.AnnotationFileCitation

A citation to a file.
NameTypeDescriptionRequiredDefault
file_idstringThe ID of the file.Yes
filenamestringThe filename of the file cited.Yes
indexintegerThe index of the file in the list of files.Yes
typeenumThe type of the file citation. Always file_citation.
Possible values: file_citation
Yes

OpenAI.AnnotationFilePath

A path to a file.
NameTypeDescriptionRequiredDefault
file_idstringThe ID of the file.Yes
indexintegerThe index of the file in the list of files.Yes
typeenumThe type of the file path. Always file_path.
Possible values: file_path
Yes

OpenAI.AnnotationType

PropertyValue
Typestring
Valuesfile_citation
url_citation
file_path
container_file_citation

OpenAI.AnnotationUrlCitation

A citation for a web resource used to generate a model response.
NameTypeDescriptionRequiredDefault
end_indexintegerThe index of the last character of the URL citation in the message.Yes
start_indexintegerThe index of the first character of the URL citation in the message.Yes
titlestringThe title of the web resource.Yes
typeenumThe type of the URL citation. Always url_citation.
Possible values: url_citation
Yes
urlstringThe URL of the web resource.Yes

OpenAI.ApproximateLocation

NameTypeDescriptionRequiredDefault
citystringNo
countrystringNo
regionstringNo
timezonestringNo
typeenum
Possible values: approximate
Yes

OpenAI.AutoChunkingStrategyRequestParam

The default strategy. This strategy currently uses a max_chunk_size_tokens of 800 and chunk_overlap_tokens of 400.
NameTypeDescriptionRequiredDefault
typeenumAlways auto.
Possible values: auto
Yes

OpenAI.ChatCompletionFunctionCallOption

Specifying a particular function via {"name": "my_function"} forces the model to call that function.
NameTypeDescriptionRequiredDefault
namestringThe name of the function to call.Yes

OpenAI.ChatCompletionFunctions

NameTypeDescriptionRequiredDefault
descriptionstringA description of what the function does, used by the model to choose when and how to call the function.No
namestringThe name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.Yes
parametersThe parameters the function accepts, described as a JSON Schema object.
See the JSON Schema reference
for documentation about the format.

Omitting parameters defines a function with an empty parameter list.
No
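A sketch of a function definition following the schema above: a name within the allowed character set and 64-character limit, a description, and JSON Schema `parameters`, wrapped in a `type: "function"` tool entry per OpenAI.ChatCompletionTool. The weather function is purely illustrative.

```python
# Hypothetical function definition with JSON Schema parameters.
get_weather = {
    "name": "get_weather",  # a-z, A-Z, 0-9, underscores, dashes; max 64 chars
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Seattle"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

# Wrapped for the "tools" request parameter.
tools = [{"type": "function", "function": get_weather}]
```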

OpenAI.ChatCompletionMessageAudioChunk

NameTypeDescriptionRequiredDefault
datastringNo
expires_atintegerNo
idstringNo
transcriptstringNo

OpenAI.ChatCompletionMessageToolCall

NameTypeDescriptionRequiredDefault
functionobjectThe function that the model called.Yes
└─ argumentsstringThe arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.No
└─ namestringThe name of the function to call.No
idstringThe ID of the tool call.Yes
typeenumThe type of the tool. Currently, only function is supported.
Possible values: function
Yes

OpenAI.ChatCompletionMessageToolCallChunk

NameTypeDescriptionRequiredDefault
functionobjectNo
└─ argumentsstringThe arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.No
└─ namestringThe name of the function to call.No
idstringThe ID of the tool call.No
indexintegerYes
typeenumThe type of the tool. Currently, only function is supported.
Possible values: function
No
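A sketch of reassembling streamed tool calls from ChatCompletionMessageToolCallChunk deltas: chunks carry an `index` identifying which tool call they extend, and `arguments` arrives as string fragments to concatenate. The chunk sequence is hypothetical; as the table notes, validate the final arguments before calling your function, since the model may emit invalid JSON.

```python
import json

# Hypothetical stream of tool-call chunks for one call at index 0.
chunks = [
    {"index": 0, "id": "call_abc", "type": "function",
     "function": {"name": "get_weather", "arguments": ""}},
    {"index": 0, "function": {"arguments": '{"city": '}},
    {"index": 0, "function": {"arguments": '"Seattle"}'}},
]

calls = {}
for chunk in chunks:
    call = calls.setdefault(chunk["index"], {"id": None, "name": "", "arguments": ""})
    if chunk.get("id"):
        call["id"] = chunk["id"]
    fn = chunk.get("function", {})
    call["name"] += fn.get("name") or ""
    call["arguments"] += fn.get("arguments") or ""

# Validate the accumulated arguments before dispatching the function.
args = json.loads(calls[0]["arguments"])
```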

OpenAI.ChatCompletionNamedToolChoice

Specifies a tool the model should use. Use to force the model to call a specific function.
NameTypeDescriptionRequiredDefault
functionobjectYes
└─ namestringThe name of the function to call.No
typeenumThe type of the tool. Currently, only function is supported.
Possible values: function
Yes

OpenAI.ChatCompletionRequestAssistantMessage

Messages sent by the model in response to user messages.
NameTypeDescriptionRequiredDefault
audioobjectData about a previous audio response from the model.No
└─ idstringUnique identifier for a previous audio response from the model.No
contentstring or arrayNo
function_callobjectDeprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model.No
└─ argumentsstringNo
└─ namestringNo
namestringAn optional name for the participant. Provides the model information to differentiate between participants of the same role.No
refusalstringThe refusal message by the assistant.No
roleenumThe role of the messages author, in this case assistant.
Possible values: assistant
Yes
tool_callsChatCompletionMessageToolCallsItemThe tool calls generated by the model, such as function calls.No

OpenAI.ChatCompletionRequestAssistantMessageContentPart

NameTypeDescriptionRequiredDefault
refusalstringThe refusal message generated by the model.Yes
textstringThe text content.Yes
typeenumThe type of the content part.
Possible values: refusal
Yes

OpenAI.ChatCompletionRequestDeveloperMessage

Developer-provided instructions that the model should follow, regardless of messages sent by the user. With o1 models and newer, developer messages replace the previous system messages.
NameTypeDescriptionRequiredDefault
contentstring or arrayYes
namestringAn optional name for the participant. Provides the model information to differentiate between participants of the same role.No
roleenumThe role of the messages author, in this case developer.
Possible values: developer
Yes

OpenAI.ChatCompletionRequestFunctionMessage

NameTypeDescriptionRequiredDefault
contentstringThe contents of the function message.Yes
namestringThe name of the function to call.Yes
roleenumThe role of the messages author, in this case function.
Possible values: function
Yes

OpenAI.ChatCompletionRequestMessage

Discriminator for OpenAI.ChatCompletionRequestMessage

This component uses the property role to discriminate between different types:
NameTypeDescriptionRequiredDefault
contentstring or arrayNo
roleobjectThe role of the author of a messageYes

OpenAI.ChatCompletionRequestMessageContentPart

Discriminator for OpenAI.ChatCompletionRequestMessageContentPart

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.ChatCompletionRequestMessageContentPartTypeYes

OpenAI.ChatCompletionRequestMessageContentPartAudio

NameTypeDescriptionRequiredDefault
input_audioobjectYes
└─ datastringBase64 encoded audio data.No
└─ formatenumThe format of the encoded audio data. Currently supports “wav” and “mp3”.
Possible values: wav, mp3
No
typeenumThe type of the content part. Always input_audio.
Possible values: input_audio
Yes

OpenAI.ChatCompletionRequestMessageContentPartFile

NameTypeDescriptionRequiredDefault
fileobjectYes
└─ file_datastringThe base64 encoded file data, used when passing the file to the model
as a string.
No
└─ file_idstringThe ID of an uploaded file to use as input.No
└─ filenamestringThe name of the file, used when passing the file to the model as a
string.
No
typeenumThe type of the content part. Always file.
Possible values: file
Yes

OpenAI.ChatCompletionRequestMessageContentPartImage

NameTypeDescriptionRequiredDefault
image_urlobjectYes
└─ detailenumSpecifies the detail level of the image.
Possible values: auto, low, high
No
└─ urlstringEither a URL of the image or the base64 encoded image data.No
typeenumThe type of the content part.
Possible values: image_url
Yes

OpenAI.ChatCompletionRequestMessageContentPartRefusal

NameTypeDescriptionRequiredDefault
refusalstringThe refusal message generated by the model.Yes
typeenumThe type of the content part.
Possible values: refusal
Yes

OpenAI.ChatCompletionRequestMessageContentPartText

NameTypeDescriptionRequiredDefault
textstringThe text content.Yes
typeenumThe type of the content part.
Possible values: text
Yes

OpenAI.ChatCompletionRequestMessageContentPartType

PropertyValue
Typestring
Valuestext
file
input_audio
image_url
refusal

OpenAI.ChatCompletionRequestSystemMessage

Developer-provided instructions that the model should follow, regardless of messages sent by the user. With o1 models and newer, use developer messages for this purpose instead.
NameTypeDescriptionRequiredDefault
contentstring or arrayYes
namestringAn optional name for the participant. Provides the model information to differentiate between participants of the same role.No
roleenumThe role of the messages author, in this case system.
Possible values: system
Yes

OpenAI.ChatCompletionRequestSystemMessageContentPart

References: OpenAI.ChatCompletionRequestMessageContentPartText

OpenAI.ChatCompletionRequestToolMessage

NameTypeDescriptionRequiredDefault
contentstring or arrayYes
roleenumThe role of the messages author, in this case tool.
Possible values: tool
Yes
tool_call_idstringTool call that this message is responding to.Yes

OpenAI.ChatCompletionRequestToolMessageContentPart

References: OpenAI.ChatCompletionRequestMessageContentPartText

OpenAI.ChatCompletionRequestUserMessage

Messages sent by an end user, containing prompts or additional context information.
NameTypeDescriptionRequiredDefault
contentstring or arrayYes
namestringAn optional name for the participant. Provides the model information to differentiate between participants of the same role.No
roleenumThe role of the messages author, in this case user.
Possible values: user
Yes
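The content-part schemas above can be combined in one user message whose `content` is an array of typed parts. A hedged sketch (the image URL and base64 audio are placeholders):

```python
# Hypothetical multimodal user message: text + image_url + input_audio parts.
user_message = {
    "role": "user",
    "content": [
        {"type": "text",
         "text": "Describe this image and the audio clip."},
        {"type": "image_url",
         "image_url": {"url": "https://example.com/cat.png", "detail": "auto"}},
        {"type": "input_audio",
         "input_audio": {"data": "<base64-audio>", "format": "wav"}},
    ],
}

part_types = [part["type"] for part in user_message["content"]]
```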

OpenAI.ChatCompletionRequestUserMessageContentPart

NameTypeDescriptionRequiredDefault
fileobjectYes
└─ file_datastringThe base64 encoded file data, used when passing the file to the model
as a string.
No
└─ file_idstringThe ID of an uploaded file to use as input.No
└─ filenamestringThe name of the file, used when passing the file to the model as a
string.
No
image_urlobjectYes
└─ detailenumSpecifies the detail level of the image.
Possible values: auto, low, high
No
└─ urlstringEither a URL of the image or the base64 encoded image data.No
input_audioobjectYes
└─ datastringBase64 encoded audio data.No
└─ formatenumThe format of the encoded audio data. Currently supports “wav” and “mp3”.
Possible values: wav, mp3
No
textstringThe text content.Yes
typeenumThe type of the content part.
Possible values: text, file, input_audio, image_url
Yes

OpenAI.ChatCompletionRole

The role of the author of a message
PropertyValue
DescriptionThe role of the author of a message
Typestring
Valuessystem
developer
user
assistant
tool
function

OpenAI.ChatCompletionStreamOptions

Options for streaming response. Only set this when you set stream: true.
NameTypeDescriptionRequiredDefault
include_usagebooleanIf set, an additional chunk will be streamed before the data: [DONE]
message. The usage field on this chunk shows the token usage statistics
for the entire request, and the choices field will always be an empty
array.

All other chunks will also include a usage field, but with a null
value. NOTE: If the stream is interrupted, you may not receive the
final usage chunk which contains the total token usage for the request.
No
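A sketch of the `include_usage` behavior described above: every chunk carries a `usage` field that is `null` except on one final chunk, streamed before `data: [DONE]`, whose `choices` array is empty. The chunk list below is a hypothetical transcript of such a stream.

```python
# Hypothetical request enabling usage reporting on a stream.
request_body = {
    "messages": [{"role": "user", "content": "Hi"}],
    "stream": True,
    "stream_options": {"include_usage": True},
}

# Hypothetical decoded chunks: usage is null until the final chunk,
# which has empty choices and the totals for the whole request.
chunks = [
    {"choices": [{"delta": {"content": "Hel"}}], "usage": None},
    {"choices": [{"delta": {"content": "lo"}}], "usage": None},
    {"choices": [], "usage": {"prompt_tokens": 8, "completion_tokens": 2,
                              "total_tokens": 10}},
]

usage = next(c["usage"] for c in chunks if c["usage"] is not None)
```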

OpenAI.ChatCompletionStreamResponseDelta

A chat completion delta generated by streamed model responses.
NameTypeDescriptionRequiredDefault
audioobjectNo
└─ datastringNo
└─ expires_atintegerNo
└─ idstringNo
└─ transcriptstringNo
contentstringThe contents of the chunk message.No
function_callobjectDeprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model.No
└─ argumentsstringNo
└─ namestringNo
refusalstringThe refusal message generated by the model.No
roleobjectThe role of the author of a messageNo
tool_callsarrayNo

OpenAI.ChatCompletionTokenLogprob

NameTypeDescriptionRequiredDefault
bytesarrayA list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.Yes
logprobnumberThe log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.Yes
tokenstringThe token.Yes
top_logprobsarrayList of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned.Yes
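A short sketch of working with a ChatCompletionTokenLogprob entry: exponentiating `logprob` recovers a linear probability, and the `bytes` field can be decoded to reconstruct text when characters span multiple tokens. The entry itself is hypothetical.

```python
import math

# Hypothetical logprob entry for one token position.
entry = {
    "token": "Hello",
    "logprob": -0.105360516,
    "bytes": [72, 101, 108, 108, 111],  # UTF-8 bytes for "Hello"
    "top_logprobs": [],
}

probability = math.exp(entry["logprob"])       # linear probability, ~0.90 here
text = bytes(entry["bytes"]).decode("utf-8")   # join bytes to recover the text
```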

OpenAI.ChatCompletionTool

NameTypeDescriptionRequiredDefault
functionOpenAI.FunctionObjectYes
typeenumThe type of the tool. Currently, only function is supported.
Possible values: function
Yes

OpenAI.ChatCompletionToolChoiceOption

Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. none is the default when no tools are present. auto is the default if tools are present.
NameTypeDescriptionRequiredDefault
functionobjectYes
└─ namestringThe name of the function to call.No
typeenumThe type of the tool. Currently, only function is supported.
Possible values: function
Yes

OpenAI.ChatOutputPrediction

Base representation of predicted output from a model.

Discriminator for OpenAI.ChatOutputPrediction

This component uses the property type to discriminate between different types:
Type ValueSchema
contentOpenAI.ChatOutputPredictionContent
NameTypeDescriptionRequiredDefault
typeOpenAI.ChatOutputPredictionTypeYes

OpenAI.ChatOutputPredictionContent

Static predicted output content, such as the content of a text file that is being regenerated.
NameTypeDescriptionRequiredDefault
contentstring or arrayYes
typeenumThe type of the predicted content you want to provide. This type is
currently always content.
Possible values: content
Yes

OpenAI.ChatOutputPredictionType

PropertyValue
Typestring
Valuescontent

OpenAI.ChunkingStrategyRequestParam

The chunking strategy used to chunk the file(s). If not set, will use the auto strategy.

Discriminator for OpenAI.ChunkingStrategyRequestParam

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeenumThe type of chunking strategy.
Possible values: auto, static
Yes

OpenAI.ChunkingStrategyResponseParam

Discriminator for OpenAI.ChunkingStrategyResponseParam

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeenum
Possible values: static, other
Yes

OpenAI.CodeInterpreterOutput

Discriminator for OpenAI.CodeInterpreterOutput

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.CodeInterpreterOutputTypeYes

OpenAI.CodeInterpreterOutputImage

The image output from the code interpreter.
NameTypeDescriptionRequiredDefault
typeenumThe type of the output. Always ‘image’.
Possible values: image
Yes
urlstringThe URL of the image output from the code interpreter.Yes

OpenAI.CodeInterpreterOutputLogs

The logs output from the code interpreter.
NameTypeDescriptionRequiredDefault
logsstringThe logs output from the code interpreter.Yes
typeenumThe type of the output. Always ‘logs’.
Possible values: logs
Yes

OpenAI.CodeInterpreterOutputType

PropertyValue
Typestring
Valueslogs
image

OpenAI.CodeInterpreterTool

A tool that runs Python code to help generate a response to a prompt.
NameTypeDescriptionRequiredDefault
containerobjectConfiguration for a code interpreter container. Optionally specify the IDs
of the files to run the code on.
Yes
└─ file_idsarrayAn optional list of uploaded files to make available to your code.No
└─ typeenumAlways auto.
Possible values: auto
No
typeenumThe type of the code interpreter tool. Always code_interpreter.
Possible values: code_interpreter
Yes
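A sketch of a `code_interpreter` tool entry per the schema above: the `container` uses the auto form, optionally pinning uploaded file IDs (the file ID is a placeholder).

```python
# Hypothetical code interpreter tool configuration.
code_interpreter_tool = {
    "type": "code_interpreter",
    "container": {
        "type": "auto",
        "file_ids": ["file-abc123"],  # optional uploaded files (placeholder ID)
    },
}
tools = [code_interpreter_tool]
```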

OpenAI.CodeInterpreterToolAuto

Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
NameTypeDescriptionRequiredDefault
file_idsarrayAn optional list of uploaded files to make available to your code.No
typeenumAlways auto.
Possible values: auto
Yes

OpenAI.CodeInterpreterToolCallItemParam

A tool call to run code.
NameTypeDescriptionRequiredDefault
codestringThe code to run, or null if not available.Yes
container_idstringThe ID of the container used to run the code.Yes
outputsarrayThe outputs generated by the code interpreter, such as logs or images.
Can be null if no outputs are available.
Yes
typeenum
Possible values: code_interpreter_call
Yes

OpenAI.CodeInterpreterToolCallItemResource

A tool call to run code.
NameTypeDescriptionRequiredDefault
codestringThe code to run, or null if not available.Yes
container_idstringThe ID of the container used to run the code.Yes
outputsarrayThe outputs generated by the code interpreter, such as logs or images.
Can be null if no outputs are available.
Yes
statusenum
Possible values: in_progress, completed, incomplete, interpreting, failed
Yes
typeenum
Possible values: code_interpreter_call
Yes

OpenAI.ComparisonFilter

A filter used to compare a specified attribute key to a given value using a defined comparison operation.
NameTypeDescriptionRequiredDefault
keystringThe key to compare against the value.Yes
typeenumSpecifies the comparison operator: eq, ne, gt, gte, lt, lte.
- eq: equals
- ne: not equal
- gt: greater than
- gte: greater than or equal
- lt: less than
- lte: less than or equal
Possible values: eq, ne, gt, gte, lt, lte
Yes
valuestring or number or booleanYes

OpenAI.CompletionUsage

Usage statistics for the completion request.
NameTypeDescriptionRequiredDefault
completion_tokensintegerNumber of tokens in the generated completion.Yes0
completion_tokens_detailsobjectBreakdown of tokens used in a completion.No
└─ accepted_prediction_tokensintegerWhen using Predicted Outputs, the number of tokens in the
prediction that appeared in the completion.
No0
└─ audio_tokensintegerAudio output tokens generated by the model.No0
└─ reasoning_tokensintegerTokens generated by the model for reasoning.No0
└─ rejected_prediction_tokensintegerWhen using Predicted Outputs, the number of tokens in the
prediction that did not appear in the completion. However, like
reasoning tokens, these tokens are still counted in the total
completion tokens for purposes of billing, output, and context window
limits.
No0
prompt_tokensintegerNumber of tokens in the prompt.Yes0
prompt_tokens_detailsobjectBreakdown of tokens used in the prompt.No
└─ audio_tokensintegerAudio input tokens present in the prompt.No0
└─ cached_tokensintegerCached tokens present in the prompt.No0
total_tokensintegerTotal number of tokens used in the request (prompt + completion).Yes0
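The invariant described above can be checked when parsing a usage object: reasoning and rejected-prediction tokens are already included inside completion_tokens, so total_tokens is simply prompt plus completion. The token counts below are illustrative:

```python
import json

# Example usage payload as returned in a chat completion response
# (token counts are made up for illustration).
raw = """
{
  "prompt_tokens": 20,
  "completion_tokens": 180,
  "total_tokens": 200,
  "completion_tokens_details": {
    "reasoning_tokens": 64,
    "accepted_prediction_tokens": 0,
    "rejected_prediction_tokens": 0,
    "audio_tokens": 0
  }
}
"""
usage = json.loads(raw)

# total_tokens is prompt + completion; reasoning tokens are a subset
# of completion_tokens, not an extra charge on top of the total.
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
print(usage["completion_tokens_details"]["reasoning_tokens"])  # → 64
```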

OpenAI.CompoundFilter

Combine multiple filters using and or or.
NameTypeDescriptionRequiredDefault
filtersarrayArray of filters to combine. Items can be ComparisonFilter or CompoundFilter.Yes
typeenumType of operation: and or or.
Possible values: and, or
Yes
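Because the filters array may itself contain CompoundFilters, arbitrary and/or trees can be expressed. A sketch of a two-condition filter (keys and values are hypothetical):

```python
import json

# Hypothetical compound filter: (category == "blog") AND (views > 100).
compound = {
    "type": "and",
    "filters": [
        {"type": "eq", "key": "category", "value": "blog"},
        {"type": "gt", "key": "views", "value": 100},
    ],
}
print(json.dumps(compound, indent=2))
```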

OpenAI.ComputerAction

Discriminator for OpenAI.ComputerAction

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.ComputerActionTypeYes

OpenAI.ComputerActionClick

A click action.
NameTypeDescriptionRequiredDefault
buttonenumIndicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.
Possible values: left, right, wheel, back, forward
Yes
typeenumSpecifies the event type. For a click action, this property is
always set to click.
Possible values: click
Yes
xintegerThe x-coordinate where the click occurred.Yes
yintegerThe y-coordinate where the click occurred.Yes

OpenAI.ComputerActionDoubleClick

A double click action.
NameTypeDescriptionRequiredDefault
typeenumSpecifies the event type. For a double click action, this property is
always set to double_click.
Possible values: double_click
Yes
xintegerThe x-coordinate where the double click occurred.Yes
yintegerThe y-coordinate where the double click occurred.Yes

OpenAI.ComputerActionDrag

A drag action.
NameTypeDescriptionRequiredDefault
patharrayAn array of coordinates representing the path of the drag action. Coordinates will appear as an array
of objects, e.g. [ { x: 100, y: 200 }, { x: 200, y: 300 } ]
Yes
typeenumSpecifies the event type. For a drag action, this property is
always set to drag.
Possible values: drag
Yes

OpenAI.ComputerActionKeyPress

A collection of keypresses the model would like to perform.
NameTypeDescriptionRequiredDefault
keysarrayThe combination of keys the model is requesting to be pressed. This is an
array of strings, each representing a key.
Yes
typeenumSpecifies the event type. For a keypress action, this property is
always set to keypress.
Possible values: keypress
Yes

OpenAI.ComputerActionMove

A mouse move action.
NameTypeDescriptionRequiredDefault
typeenumSpecifies the event type. For a move action, this property is
always set to move.
Possible values: move
Yes
xintegerThe x-coordinate to move to.Yes
yintegerThe y-coordinate to move to.Yes

OpenAI.ComputerActionScreenshot

A screenshot action.
NameTypeDescriptionRequiredDefault
typeenumSpecifies the event type. For a screenshot action, this property is
always set to screenshot.
Possible values: screenshot
Yes

OpenAI.ComputerActionScroll

A scroll action.
NameTypeDescriptionRequiredDefault
scroll_xintegerThe horizontal scroll distance.Yes
scroll_yintegerThe vertical scroll distance.Yes
typeenumSpecifies the event type. For a scroll action, this property is
always set to scroll.
Possible values: scroll
Yes
xintegerThe x-coordinate where the scroll occurred.Yes
yintegerThe y-coordinate where the scroll occurred.Yes

OpenAI.ComputerActionType

PropertyValue
Typestring
Valuesscreenshot
click
double_click
scroll
type
wait
keypress
drag
move
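Consumers of ComputerAction objects typically dispatch on the type discriminator listed above. A minimal sketch handling a few of the action types (the helper name is illustrative, not part of the API):

```python
# Dispatch on the "type" discriminator of an OpenAI.ComputerAction;
# only a few action types are handled in this sketch.
def describe_action(action: dict) -> str:
    t = action["type"]
    if t == "click":
        return f'{action["button"]} click at ({action["x"]}, {action["y"]})'
    if t == "scroll":
        return f'scroll by ({action["scroll_x"]}, {action["scroll_y"]})'
    if t == "screenshot":
        return "take screenshot"
    return f"unhandled action type: {t}"

print(describe_action({"type": "click", "button": "left", "x": 100, "y": 200}))
# → left click at (100, 200)
```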

OpenAI.ComputerActionTypeKeys

An action to type in text.
NameTypeDescriptionRequiredDefault
textstringThe text to type.Yes
typeenumSpecifies the event type. For a type action, this property is
always set to type.
Possible values: type
Yes

OpenAI.ComputerActionWait

A wait action.
NameTypeDescriptionRequiredDefault
typeenumSpecifies the event type. For a wait action, this property is
always set to wait.
Possible values: wait
Yes

OpenAI.ComputerToolCallItemParam

A tool call to a computer use tool.
NameTypeDescriptionRequiredDefault
actionOpenAI.ComputerActionYes
call_idstringAn identifier used when responding to the tool call with output.Yes
pending_safety_checksarrayThe pending safety checks for the computer call.Yes
typeenum
Possible values: computer_call
Yes

OpenAI.ComputerToolCallItemResource

A tool call to a computer use tool.
NameTypeDescriptionRequiredDefault
actionOpenAI.ComputerActionYes
call_idstringAn identifier used when responding to the tool call with output.Yes
pending_safety_checksarrayThe pending safety checks for the computer call.Yes
statusenumThe status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
Possible values: in_progress, completed, incomplete
Yes
typeenum
Possible values: computer_call
Yes

OpenAI.ComputerToolCallOutputItemOutput

Discriminator for OpenAI.ComputerToolCallOutputItemOutput

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.ComputerToolCallOutputItemOutputTypeA computer screenshot image used with the computer use tool.Yes

OpenAI.ComputerToolCallOutputItemOutputComputerScreenshot

NameTypeDescriptionRequiredDefault
file_idstringNo
image_urlstringNo
typeenum
Possible values: computer_screenshot
Yes

OpenAI.ComputerToolCallOutputItemOutputType

A computer screenshot image used with the computer use tool.
PropertyValue
DescriptionA computer screenshot image used with the computer use tool.
Typestring
Valuescomputer_screenshot

OpenAI.ComputerToolCallOutputItemParam

The output of a computer tool call.
NameTypeDescriptionRequiredDefault
acknowledged_safety_checksarrayThe safety checks reported by the API that have been acknowledged by the
developer.
No
call_idstringThe ID of the computer tool call that produced the output.Yes
outputOpenAI.ComputerToolCallOutputItemOutputYes
typeenum
Possible values: computer_call_output
Yes

OpenAI.ComputerToolCallOutputItemResource

The output of a computer tool call.
NameTypeDescriptionRequiredDefault
acknowledged_safety_checksarrayThe safety checks reported by the API that have been acknowledged by the
developer.
No
call_idstringThe ID of the computer tool call that produced the output.Yes
outputOpenAI.ComputerToolCallOutputItemOutputYes
statusenumThe status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
Possible values: in_progress, completed, incomplete
Yes
typeenum
Possible values: computer_call_output
Yes

OpenAI.ComputerToolCallSafetyCheck

A pending safety check for the computer call.
NameTypeDescriptionRequiredDefault
codestringThe type of the pending safety check.Yes
idstringThe ID of the pending safety check.Yes
messagestringDetails about the pending safety check.Yes

OpenAI.ComputerUsePreviewTool

A tool that controls a virtual computer.
NameTypeDescriptionRequiredDefault
display_heightintegerThe height of the computer display.Yes
display_widthintegerThe width of the computer display.Yes
environmentenumThe type of computer environment to control.
Possible values: windows, mac, linux, ubuntu, browser
Yes
typeenumThe type of the computer use tool. Always computer_use_preview.
Possible values: computer_use_preview
Yes

OpenAI.Coordinate

An x/y coordinate pair, e.g. { x: 100, y: 200 }.
NameTypeDescriptionRequiredDefault
xintegerThe x-coordinate.Yes
yintegerThe y-coordinate.Yes

OpenAI.CreateEmbeddingResponse

NameTypeDescriptionRequiredDefault
dataarrayThe list of embeddings generated by the model.Yes
modelstringThe name of the model used to generate the embedding.Yes
objectenumThe object type, which is always “list”.
Possible values: list
Yes
usageobjectThe usage information for the request.Yes
└─ prompt_tokensintegerThe number of tokens used by the prompt.No
└─ total_tokensintegerThe total number of tokens used by the request.No
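When consuming this response shape, each data item carries an index so vectors can be paired back up with the input texts. A sketch of parsing the response (the model name and vector values are illustrative):

```python
import json

# Illustrative embeddings response; real vectors are much longer.
raw = """
{
  "object": "list",
  "model": "text-embedding-3-small",
  "data": [
    {"object": "embedding", "index": 1, "embedding": [0.2, 0.1]},
    {"object": "embedding", "index": 0, "embedding": [0.4, 0.3]}
  ],
  "usage": {"prompt_tokens": 8, "total_tokens": 8}
}
"""
resp = json.loads(raw)

# Restore input order by index before pairing vectors with inputs.
vectors = [d["embedding"] for d in sorted(resp["data"], key=lambda d: d["index"])]
print(vectors[0])  # vector for the first input
```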

OpenAI.CreateEvalItem

A chat message that makes up the prompt or context. May include variable references to the item namespace, i.e., {{item.name}}.
NameTypeDescriptionRequiredDefault
contentstring or OpenAI.EvalItemContentText inputs to the model - can contain template strings.Yes
roleenumThe role of the message input. One of user, assistant, system, or
developer.
Possible values: user, assistant, system, developer
Yes
typeenumThe type of the message input. Always message.
Possible values: message
No

OpenAI.CreateEvalRunRequest

NameTypeDescriptionRequiredDefault
data_sourceobjectYes
└─ typeOpenAI.EvalRunDataSourceTypeNo
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
namestringThe name of the run.No

OpenAI.CreateFineTuningJobRequest

Valid models:
babbage-002
davinci-002
gpt-3.5-turbo
gpt-4o-mini
NameTypeDescriptionRequiredDefault
hyperparametersobjectThe hyperparameters used for the fine-tuning job.
This value is now deprecated in favor of method, and should be passed in under the method parameter.
No
└─ batch_sizeenum
Possible values: auto
No
└─ learning_rate_multiplierenum
Possible values: auto
No
└─ n_epochsenum
Possible values: auto
No
integrationsarrayA list of integrations to enable for your fine-tuning job.No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
methodOpenAI.FineTuneMethodThe method used for fine-tuning.No
modelstring (see valid models above)The name of the model to fine-tune. You can select one of the
supported models.
Yes
seedintegerThe seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases.
If a seed is not specified, one will be generated for you.
No
suffixstringA string of up to 64 characters that will be added to your fine-tuned model name.

For example, a suffix of “custom-model-name” would produce a model name like ft:gpt-4o-mini:openai:custom-model-name:7p4lURel.
NoNone
training_filestringThe ID of an uploaded file that contains training data.

See upload file for how to upload a file.

Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose fine-tune.

The contents of the file should differ depending on whether the model uses the chat format, or whether the fine-tuning method uses the preference format.

See the fine-tuning guide for more details.
Yes
validation_filestringThe ID of an uploaded file that contains validation data.

If you provide this file, the data is used to generate validation
metrics periodically during fine-tuning. These metrics can be viewed in
the fine-tuning results file.
The same data should not be present in both train and validation files.

Your dataset must be formatted as a JSONL file. You must upload your file with the purpose fine-tune.

See the fine-tuning guide for more details.
No
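A minimal request body for this schema might look like the following sketch; the file IDs and suffix are placeholders, and the deprecated top-level hyperparameters field is omitted:

```python
import json

# Minimal fine-tuning job request body (file IDs are placeholders).
body = {
    "model": "gpt-4o-mini",
    "training_file": "file-abc123",    # uploaded with purpose "fine-tune"
    "validation_file": "file-def456",  # optional validation data
    "suffix": "custom-model-name",     # appended to the fine-tuned model name
    "seed": 42,                        # for reproducibility
}
print(json.dumps(body, indent=2))
```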

OpenAI.CreateFineTuningJobRequestIntegration

Discriminator for OpenAI.CreateFineTuningJobRequestIntegration

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typestringYes

OpenAI.CreateFineTuningJobRequestWandbIntegration

NameTypeDescriptionRequiredDefault
typeenum
Possible values: wandb
Yes
wandbobjectYes
└─ entitystringNo
└─ namestringNo
└─ projectstringNo
└─ tagsarrayNo

OpenAI.CreateVectorStoreFileBatchRequest

NameTypeDescriptionRequiredDefault
attributesobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard. Keys are strings
with a maximum length of 64 characters. Values are strings with a maximum
length of 512 characters, booleans, or numbers.
No
chunking_strategyOpenAI.ChunkingStrategyRequestParamThe chunking strategy used to chunk the file(s). If not set, will use the auto strategy.No
file_idsarrayA list of File IDs that the vector store should use. Useful for tools like file_search that can access files.Yes

OpenAI.CreateVectorStoreFileRequest

NameTypeDescriptionRequiredDefault
attributesobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard. Keys are strings
with a maximum length of 64 characters. Values are strings with a maximum
length of 512 characters, booleans, or numbers.
No
chunking_strategyOpenAI.ChunkingStrategyRequestParamThe chunking strategy used to chunk the file(s). If not set, will use the auto strategy.No
file_idstringA File ID that the vector store should use. Useful for tools like file_search that can access files.Yes

OpenAI.CreateVectorStoreRequest

NameTypeDescriptionRequiredDefault
chunking_strategyobjectThe default strategy. This strategy currently uses a max_chunk_size_tokens of 800 and chunk_overlap_tokens of 400.No
└─ staticOpenAI.StaticChunkingStrategyNo
└─ typeenumAlways static.
Possible values: static
No
expires_afterOpenAI.VectorStoreExpirationAfterThe expiration policy for a vector store.No
file_idsarrayA list of File IDs that the vector store should use. Useful for tools like file_search that can access files.No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
namestringThe name of the vector store.No
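A sketch of a vector store creation request using a static chunking strategy; the name, file IDs, and metadata are placeholders, and the token values shown are the documented defaults:

```python
import json

# Vector store creation body with an explicit static chunking strategy.
body = {
    "name": "support-docs",
    "file_ids": ["file-abc123", "file-def456"],
    "chunking_strategy": {
        "type": "static",
        "static": {
            "max_chunk_size_tokens": 800,  # documented default
            "chunk_overlap_tokens": 400,   # documented default
        },
    },
    "metadata": {"team": "support"},
}
print(json.dumps(body, indent=2))
```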

OpenAI.DeleteFileResponse

NameTypeDescriptionRequiredDefault
deletedbooleanYes
idstringYes
objectenum
Possible values: file
Yes

OpenAI.DeleteVectorStoreFileResponse

NameTypeDescriptionRequiredDefault
deletedbooleanYes
idstringYes
objectenum
Possible values: vector_store.file.deleted
Yes

OpenAI.DeleteVectorStoreResponse

NameTypeDescriptionRequiredDefault
deletedbooleanYes
idstringYes
objectenum
Possible values: vector_store.deleted
Yes

OpenAI.Embedding

Represents an embedding vector returned by the embeddings endpoint.
NameTypeDescriptionRequiredDefault
embeddingarray or stringYes
indexintegerThe index of the embedding in the list of embeddings.Yes
objectenumThe object type, which is always “embedding”.
Possible values: embedding
Yes

OpenAI.Eval

An Eval object with a data source config and testing criteria. An Eval represents a task to be done for your LLM integration, such as:
  • Improve the quality of my chatbot
  • See how well my chatbot handles customer support
  • Check if o4-mini is better at my usecase than gpt-4o
NameTypeDescriptionRequiredDefault
created_atintegerThe Unix timestamp (in seconds) for when the eval was created.Yes
data_source_configobjectYes
└─ typeOpenAI.EvalDataSourceConfigTypeNo
idstringUnique identifier for the evaluation.Yes
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
Yes
namestringThe name of the evaluation.Yes
objectenumThe object type.
Possible values: eval
Yes
testing_criteriaarrayA list of testing criteria.YesNone

OpenAI.EvalApiError

An object representing an error response from the Eval API.
NameTypeDescriptionRequiredDefault
codestringThe error code.Yes
messagestringThe error message.Yes

OpenAI.EvalCompletionsRunDataSourceParams

A CompletionsRunDataSource object describing a model sampling configuration.
NameTypeDescriptionRequiredDefault
input_messagesobjectNo
└─ item_referencestringA reference to a variable in the item namespace, i.e., “item.input_trajectory”No
└─ templatearrayA list of chat messages forming the prompt or context. May include variable references to the item namespace, i.e., {{item.name}}.No
└─ typeenumThe type of input messages. Always item_reference.
Possible values: item_reference
No
modelstringThe name of the model to use for generating completions (e.g. “o3-mini”).No
sampling_paramsAzureEvalAPICompletionsSamplingParamsNo
sourceobjectYes
└─ contentarrayThe content of the jsonl file.No
└─ created_afterintegerAn optional Unix timestamp to filter items created after this time.No
└─ created_beforeintegerAn optional Unix timestamp to filter items created before this time.No
└─ idstringThe identifier of the file.No
└─ limitintegerAn optional maximum number of items to return.No
└─ metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
└─ modelstringAn optional model to filter by (e.g., ‘gpt-4o’).No
└─ typeenumThe type of source. Always stored_completions.
Possible values: stored_completions
No
typeenumThe type of run data source. Always completions.
Possible values: completions
Yes

OpenAI.EvalCustomDataSourceConfigParams

A CustomDataSourceConfig object that defines the schema for the data source used for the evaluation runs. This schema is used to define the shape of the data that will be:
  • Used to define your testing criteria and
  • What data is required when creating a run
NameTypeDescriptionRequiredDefault
include_sample_schemabooleanWhether the eval should expect you to populate the sample namespace (i.e., by generating responses from your data source)NoFalse
item_schemaobjectThe json schema for each row in the data source.Yes
typeenumThe type of data source. Always custom.
Possible values: custom
Yes
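A hypothetical custom data source config in which each row carries a question and a ground-truth answer, with the sample namespace populated by generating responses from the data source (field names are illustrative):

```python
import json

# Custom data source config: item_schema is an ordinary JSON schema
# describing each row of the data source.
config = {
    "type": "custom",
    "include_sample_schema": True,
    "item_schema": {
        "type": "object",
        "properties": {
            "question": {"type": "string"},
            "ground_truth": {"type": "string"},
        },
        "required": ["question", "ground_truth"],
    },
}
print(json.dumps(config, indent=2))
```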

OpenAI.EvalCustomDataSourceConfigResource

A CustomDataSourceConfig which specifies the schema of your item and optionally sample namespaces. The response schema defines the shape of the data that will be:
  • Used to define your testing criteria and
  • What data is required when creating a run
NameTypeDescriptionRequiredDefault
schemaobjectThe json schema for the run data source items.
Learn how to build JSON schemas here.
Yes
typeenumThe type of data source. Always custom.
Possible values: custom
Yes

OpenAI.EvalDataSourceConfigParams

Discriminator for OpenAI.EvalDataSourceConfigParams

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.EvalDataSourceConfigTypeYes

OpenAI.EvalDataSourceConfigResource

Discriminator for OpenAI.EvalDataSourceConfigResource

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.EvalDataSourceConfigTypeYes

OpenAI.EvalDataSourceConfigType

PropertyValue
Typestring
Valuescustom
logs
stored_completions

OpenAI.EvalGraderLabelModelParams

A LabelModelGrader object which uses a model to assign labels to each item in the evaluation.
NameTypeDescriptionRequiredDefault
inputarrayA list of chat messages forming the prompt or context. May include variable references to the item namespace, i.e., {{item.name}}.Yes
labelsarrayThe labels to classify to each item in the evaluation.Yes
modelstringThe model to use for the evaluation. Must support structured outputs.Yes
namestringThe name of the grader.Yes
passing_labelsarrayThe labels that indicate a passing result. Must be a subset of labels.Yes
typeenumThe object type, which is always label_model.
Possible values: label_model
Yes
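A sketch of a label-model grader that classifies each item as correct or incorrect; the grader name, model, and the template references into the item and sample namespaces are illustrative assumptions:

```python
import json

# Label-model grader: passing_labels must be a subset of labels.
grader = {
    "type": "label_model",
    "name": "correctness-grader",
    "model": "gpt-4o",
    "input": [
        {"role": "system", "content": "Label the answer as correct or incorrect."},
        {"role": "user",
         "content": "Question: {{item.question}}\nAnswer: {{sample.output_text}}"},
    ],
    "labels": ["correct", "incorrect"],
    "passing_labels": ["correct"],
}
assert set(grader["passing_labels"]) <= set(grader["labels"])
print(json.dumps(grader, indent=2))
```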

OpenAI.EvalGraderLabelModelResource

NameTypeDescriptionRequiredDefault
inputarrayYes
labelsarrayThe labels to assign to each item in the evaluation.Yes
modelstringThe model to use for the evaluation. Must support structured outputs.Yes
namestringThe name of the grader.Yes
passing_labelsarrayThe labels that indicate a passing result. Must be a subset of labels.Yes
typeenumThe object type, which is always label_model.
Possible values: label_model
Yes

OpenAI.EvalGraderParams

Discriminator for OpenAI.EvalGraderParams

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.GraderTypeYes

OpenAI.EvalGraderPythonParams

NameTypeDescriptionRequiredDefault
image_tagstringThe image tag to use for the python script.No
namestringThe name of the grader.Yes
pass_thresholdnumberThe threshold for the score.No
sourcestringThe source code of the python script.Yes
typeenumThe object type, which is always python.
Possible values: python
Yes

OpenAI.EvalGraderPythonResource

NameTypeDescriptionRequiredDefault
image_tagstringThe image tag to use for the python script.No
namestringThe name of the grader.Yes
pass_thresholdnumberThe threshold for the score.No
sourcestringThe source code of the python script.Yes
typeenumThe object type, which is always python.
Possible values: python
Yes

OpenAI.EvalGraderResource

Discriminator for OpenAI.EvalGraderResource

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.GraderTypeYes

OpenAI.EvalGraderScoreModelParams

NameTypeDescriptionRequiredDefault
inputarrayThe input text. This may include template strings.Yes
modelstringThe model to use for the evaluation.Yes
namestringThe name of the grader.Yes
pass_thresholdnumberThe threshold for the score.No
rangearrayThe range of the score. Defaults to [0, 1].No
sampling_paramsThe sampling parameters for the model.No
typeenumThe object type, which is always score_model.
Possible values: score_model
Yes

OpenAI.EvalGraderScoreModelResource

NameTypeDescriptionRequiredDefault
inputarrayThe input text. This may include template strings.Yes
modelstringThe model to use for the evaluation.Yes
namestringThe name of the grader.Yes
pass_thresholdnumberThe threshold for the score.No
rangearrayThe range of the score. Defaults to [0, 1].No
sampling_paramsThe sampling parameters for the model.No
typeenumThe object type, which is always score_model.
Possible values: score_model
Yes

OpenAI.EvalGraderStringCheckParams

NameTypeDescriptionRequiredDefault
inputstringThe input text. This may include template strings.Yes
namestringThe name of the grader.Yes
operationenumThe string check operation to perform. One of eq, ne, like, or ilike.
Possible values: eq, ne, like, ilike
Yes
referencestringThe reference text. This may include template strings.Yes
typeenumThe object type, which is always string_check.
Possible values: string_check
Yes
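The four string-check operations can be mirrored locally to preview grading behavior; note that the like/ilike semantics sketched here (substring containment, case-sensitive and case-insensitive) are an assumption for illustration, not taken from this reference:

```python
# Local sketch of the string_check operations: eq, ne, like, ilike.
def string_check(operation: str, input_text: str, reference: str) -> bool:
    if operation == "eq":
        return input_text == reference
    if operation == "ne":
        return input_text != reference
    if operation == "like":      # assumed: case-sensitive containment
        return reference in input_text
    if operation == "ilike":     # assumed: case-insensitive containment
        return reference.lower() in input_text.lower()
    raise ValueError(f"unknown operation: {operation}")

print(string_check("ilike", "The answer is Paris.", "paris"))  # → True
```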

OpenAI.EvalGraderTextSimilarityParams

NameTypeDescriptionRequiredDefault
evaluation_metricenumThe evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l.
Possible values: fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l
Yes
inputstringThe text being graded.Yes
namestringThe name of the grader.Yes
pass_thresholdnumberThe threshold for the score.Yes
referencestringThe text being graded against.Yes
typeenumThe type of grader.
Possible values: text_similarity
Yes

OpenAI.EvalGraderTextSimilarityResource

NameTypeDescriptionRequiredDefault
evaluation_metricenumThe evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l.
Possible values: fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l
Yes
inputstringThe text being graded.Yes
namestringThe name of the grader.Yes
pass_thresholdnumberThe threshold for the score.Yes
referencestringThe text being graded against.Yes
typeenumThe type of grader.
Possible values: text_similarity
Yes

OpenAI.EvalItem

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.
NameTypeDescriptionRequiredDefault
contentobjectYes
└─ typeOpenAI.EvalItemContentTypeNo
roleenumThe role of the message input. One of user, assistant, system, or
developer.
Possible values: user, assistant, system, developer
Yes
typeenumThe type of the message input. Always message.
Possible values: message
No

OpenAI.EvalItemContent

Discriminator for OpenAI.EvalItemContent

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.EvalItemContentTypeYes

OpenAI.EvalItemContentInputText

NameTypeDescriptionRequiredDefault
textstringYes
typeenum
Possible values: input_text
Yes

OpenAI.EvalItemContentOutputText

NameTypeDescriptionRequiredDefault
textstringYes
typeenum
Possible values: output_text
Yes

OpenAI.EvalItemContentType

PropertyValue
Typestring
Valuesinput_text
output_text

OpenAI.EvalJsonlRunDataSourceParams

A JsonlRunDataSource object that specifies a JSONL file matching the eval.
NameTypeDescriptionRequiredDefault
sourceobjectYes
└─ contentarrayThe content of the jsonl file.No
└─ idstringThe identifier of the file.No
└─ typeenumThe type of jsonl source. Always file_id.
Possible values: file_id
No
typeenumThe type of data source. Always jsonl.
Possible values: jsonl
Yes
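The two jsonl source shapes can be compared side by side: inline file content versus a reference to an uploaded file. The file ID and the row fields are placeholders, and the `item` wrapper per content row is an assumption based on the surrounding eval schemas:

```python
# A jsonl run data source pointing at inline content...
inline_source = {
    "type": "jsonl",
    "source": {
        "type": "file_content",
        "content": [
            {"item": {"question": "2+2?", "ground_truth": "4"}},
        ],
    },
}
# ...or at an uploaded file by ID.
file_source = {
    "type": "jsonl",
    "source": {"type": "file_id", "id": "file-abc123"},
}
print(inline_source["source"]["type"], file_source["source"]["type"])
```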

OpenAI.EvalList

An object representing a list of evals.
NameTypeDescriptionRequiredDefault
dataarrayAn array of eval objects.Yes
first_idstringThe identifier of the first eval in the data array.Yes
has_morebooleanIndicates whether there are more evals available.Yes
last_idstringThe identifier of the last eval in the data array.Yes
objectenumThe type of this object. It is always set to “list”.
Possible values: list
Yes

OpenAI.EvalLogsDataSourceConfigParams

A data source config which specifies the metadata property of your logs query. This is usually metadata such as usecase=chatbot or prompt-version=v2.
NameTypeDescriptionRequiredDefault
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
typeenumThe type of data source. Always logs.
Possible values: logs
Yes

OpenAI.EvalLogsDataSourceConfigResource

A LogsDataSourceConfig which specifies the metadata property of your logs query. This is usually metadata such as usecase=chatbot or prompt-version=v2. The schema returned by this data source config is used to define what variables are available in your evals. item and sample are both defined when using this data source config.
NameTypeDescriptionRequiredDefault
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
Yes
schemaobjectThe json schema for the run data source items.
Learn how to build JSON schemas here.
Yes
typeenumThe type of data source. Always logs.
Possible values: logs
Yes

OpenAI.EvalResponsesRunDataSourceParams

A ResponsesRunDataSource object describing a model sampling configuration.
NameTypeDescriptionRequiredDefault
input_messagesobjectNo
└─ item_referencestringA reference to a variable in the item namespace, i.e., “item.name”No
└─ templatearrayA list of chat messages forming the prompt or context. May include variable references to the item namespace, i.e., {{item.name}}.No
└─ typeenumThe type of input messages. Always item_reference.
Possible values: item_reference
No
modelstringThe name of the model to use for generating completions (e.g. “o3-mini”).No
sampling_paramsAzureEvalAPIResponseSamplingParamsNo
sourceobjectYes
└─ contentarrayThe content of the jsonl file.No
└─ created_afterintegerOnly include items created after this timestamp (inclusive). This is a query parameter used to select responses.No
└─ created_beforeintegerOnly include items created before this timestamp (inclusive). This is a query parameter used to select responses.No
└─ idstringThe identifier of the file.No
└─ instructions_searchstringOptional string to search the ‘instructions’ field. This is a query parameter used to select responses.No
└─ metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
└─ modelstringThe name of the model to find responses for. This is a query parameter used to select responses.No
└─ reasoning_effortOpenAI.ReasoningEffortOptional reasoning effort parameter. This is a query parameter used to select responses.No
└─ temperaturenumberSampling temperature. This is a query parameter used to select responses.No
└─ toolsarrayList of tool names. This is a query parameter used to select responses.No
└─ top_pnumberNucleus sampling parameter. This is a query parameter used to select responses.No
└─ typeenumThe type of run data source. Always responses.
Possible values: responses
No
└─ usersarrayList of user identifiers. This is a query parameter used to select responses.No
typeenumThe type of run data source. Always responses.
Possible values: responses
Yes

OpenAI.EvalRun

A schema representing an evaluation run.
NameTypeDescriptionRequiredDefault
created_atintegerUnix timestamp (in seconds) when the evaluation run was created.Yes
data_sourceobjectYes
└─ typeOpenAI.EvalRunDataSourceTypeNo
errorOpenAI.EvalApiErrorAn object representing an error response from the Eval API.Yes
eval_idstringThe identifier of the associated evaluation.Yes
idstringUnique identifier for the evaluation run.Yes
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
Yes
modelstringThe model that is evaluated, if applicable.Yes
namestringThe name of the evaluation run.Yes
objectenumThe type of the object. Always “eval.run”.
Possible values: eval.run
Yes
per_model_usagearrayUsage statistics for each model during the evaluation run.Yes
per_testing_criteria_resultsarrayResults per testing criteria applied during the evaluation run.Yes
report_urlstringThe URL to the rendered evaluation run report on the UI dashboard.Yes
result_countsobjectCounters summarizing the outcomes of the evaluation run.Yes
└─ erroredintegerNumber of output items that resulted in an error.No
└─ failedintegerNumber of output items that failed to pass the evaluation.No
└─ passedintegerNumber of output items that passed the evaluation.No
└─ totalintegerTotal number of executed output items.No
statusstringThe status of the evaluation run.Yes

OpenAI.EvalRunDataContentSource

Discriminator for OpenAI.EvalRunDataContentSource

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.EvalRunDataContentSourceTypeYes

OpenAI.EvalRunDataContentSourceType

PropertyValue
Typestring
Valuesfile_id
file_content
stored_completions
responses

OpenAI.EvalRunDataSourceCompletionsResource

NameTypeDescriptionRequiredDefault
typeenum
Possible values: completions
Yes

OpenAI.EvalRunDataSourceJsonlResource

NameTypeDescriptionRequiredDefault
typeenum
Possible values: jsonl
Yes

OpenAI.EvalRunDataSourceParams

Discriminator for OpenAI.EvalRunDataSourceParams

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.EvalRunDataSourceTypeYes

OpenAI.EvalRunDataSourceResource

NameTypeDescriptionRequiredDefault
typeOpenAI.EvalRunDataSourceTypeYes

OpenAI.EvalRunDataSourceResponsesResource

NameTypeDescriptionRequiredDefault
typeenum
Possible values: responses
Yes

OpenAI.EvalRunDataSourceType

PropertyValue
Typestring
Valuesjsonl
completions
responses

OpenAI.EvalRunFileContentDataContentSource

NameTypeDescriptionRequiredDefault
contentarrayThe content of the jsonl file.Yes
typeenumThe type of jsonl source. Always file_content.
Possible values: file_content
Yes

OpenAI.EvalRunFileIdDataContentSource

NameTypeDescriptionRequiredDefault
idstringThe identifier of the file.Yes
typeenumThe type of jsonl source. Always file_id.
Possible values: file_id
Yes
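
The two jsonl content sources above differ only in how rows are supplied: inline, or by reference to an uploaded file. A minimal sketch, with a hypothetical file ID and an illustrative row shape:

```python
# file_content supplies jsonl rows inline in the request...
file_content_source = {
    "type": "file_content",
    "content": [
        {"item": {"question": "What is 2 + 2?", "answer": "4"}},  # illustrative row
    ],
}

# ...while file_id references a previously uploaded jsonl file.
file_id_source = {
    "type": "file_id",
    "id": "file-abc123",  # hypothetical file identifier
}

# The "type" discriminator selects the variant.
assert {file_content_source["type"], file_id_source["type"]} == {"file_content", "file_id"}
```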

OpenAI.EvalRunList

An object representing a list of runs for an evaluation.
NameTypeDescriptionRequiredDefault
dataarrayAn array of eval run objects.Yes
first_idstringThe identifier of the first eval run in the data array.Yes
has_morebooleanIndicates whether there are more eval runs available.Yes
last_idstringThe identifier of the last eval run in the data array.Yes
objectenumThe type of this object. It is always set to “list”.
Possible values: list
Yes

OpenAI.EvalRunOutputItem

A schema representing an evaluation run output item.
NameTypeDescriptionRequiredDefault
created_atintegerUnix timestamp (in seconds) when the evaluation run was created.Yes
datasource_itemobjectDetails of the input data source item.Yes
datasource_item_idintegerThe identifier for the data source item.Yes
eval_idstringThe identifier of the evaluation group.Yes
idstringUnique identifier for the evaluation run output item.Yes
objectenumThe type of the object. Always “eval.run.output_item”.
Possible values: eval.run.output_item
Yes
resultsarrayA list of results from the evaluation run.Yes
run_idstringThe identifier of the evaluation run associated with this output item.Yes
sampleobjectA sample containing the input and output of the evaluation run.Yes
└─ errorOpenAI.EvalApiErrorAn object representing an error response from the Eval API.No
└─ finish_reasonstringThe reason why the sample generation was finished.No
└─ inputarrayAn array of input messages.No
└─ max_completion_tokensintegerThe maximum number of tokens allowed for completion.No
└─ modelstringThe model used for generating the sample.No
└─ outputarrayAn array of output messages.No
└─ seedintegerThe seed used for generating the sample.No
└─ temperaturenumberThe sampling temperature used.No
└─ top_pnumberThe top_p value used for sampling.No
└─ usageobjectToken usage details for the sample.No
└─ cached_tokensintegerThe number of tokens retrieved from cache.No
└─ completion_tokensintegerThe number of completion tokens generated.No
└─ prompt_tokensintegerThe number of prompt tokens used.No
└─ total_tokensintegerThe total number of tokens used.No
statusstringThe status of the evaluation run.Yes

OpenAI.EvalRunOutputItemList

An object representing a list of output items for an evaluation run.
NameTypeDescriptionRequiredDefault
dataarrayAn array of eval run output item objects.Yes
first_idstringThe identifier of the first eval run output item in the data array.Yes
has_morebooleanIndicates whether there are more eval run output items available.Yes
last_idstringThe identifier of the last eval run output item in the data array.Yes
objectenumThe type of this object. It is always set to “list”.
Possible values: list
Yes

OpenAI.EvalRunResponsesDataContentSource

An EvalResponsesSource object describing a run data source configuration.
NameTypeDescriptionRequiredDefault
created_afterintegerOnly include items created after this timestamp (inclusive). This is a query parameter used to select responses.No
created_beforeintegerOnly include items created before this timestamp (inclusive). This is a query parameter used to select responses.No
instructions_searchstringOptional string to search the ‘instructions’ field. This is a query parameter used to select responses.No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
modelstringThe name of the model to find responses for. This is a query parameter used to select responses.No
reasoning_effortobjectreasoning models only

Constrains effort on reasoning for
reasoning models.
Currently supported values are low, medium, and high. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
No
temperaturenumberSampling temperature. This is a query parameter used to select responses.No
toolsarrayList of tool names. This is a query parameter used to select responses.No
top_pnumberNucleus sampling parameter. This is a query parameter used to select responses.No
typeenumThe type of run data source. Always responses.
Possible values: responses
Yes
usersarrayList of user identifiers. This is a query parameter used to select responses.No

OpenAI.EvalRunStoredCompletionsDataContentSource

A StoredCompletionsRunDataSource configuration describing a set of filters.
NameTypeDescriptionRequiredDefault
created_afterintegerAn optional Unix timestamp to filter items created after this time.No
created_beforeintegerAn optional Unix timestamp to filter items created before this time.No
limitintegerAn optional maximum number of items to return.No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
Yes
modelstringAn optional model to filter by (e.g., ‘gpt-4o’).No
typeenumThe type of source. Always stored_completions.
Possible values: stored_completions
Yes
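
As a sketch, a stored-completions source filtering by model, time window, and a single illustrative metadata key:

```python
stored_completions_source = {
    "type": "stored_completions",               # required discriminator
    "model": "gpt-4o",                          # optional model filter
    "created_after": 1690000000,                # optional Unix-timestamp bounds
    "created_before": 1700000000,
    "limit": 100,                               # optional cap on returned items
    "metadata": {"environment": "production"},  # illustrative metadata filter
}
assert stored_completions_source["created_after"] < stored_completions_source["created_before"]
```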

OpenAI.EvalStoredCompletionsDataSourceConfigParams

Deprecated in favor of LogsDataSourceConfig.
NameTypeDescriptionRequiredDefault
metadataobjectMetadata filters for the stored completions data source.No
typeenumThe type of data source. Always stored_completions.
Possible values: stored_completions
Yes

OpenAI.EvalStoredCompletionsDataSourceConfigResource

Deprecated in favor of LogsDataSourceConfig.
NameTypeDescriptionRequiredDefault
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
Yes
schemaobjectThe json schema for the run data source items.
Learn how to build JSON schemas here.
Yes
typeenumThe type of data source. Always stored_completions.
Possible values: stored_completions
Yes

OpenAI.FileSearchTool

A tool that searches for relevant content from uploaded files.
NameTypeDescriptionRequiredDefault
filtersobjectNo
max_num_resultsintegerThe maximum number of results to return. This number should be between 1 and 50 inclusive.No
ranking_optionsobjectNo
└─ rankerenumThe ranker to use for the file search.
Possible values: auto, default-2024-11-15
No
└─ score_thresholdnumberThe score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.No
typeenumThe type of the file search tool. Always file_search.
Possible values: file_search
Yes
vector_store_idsarrayThe IDs of the vector stores to search.Yes
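
A minimal file search tool definition following the constraints above (the vector store ID is hypothetical):

```python
file_search_tool = {
    "type": "file_search",
    "vector_store_ids": ["vs_abc123"],  # hypothetical vector store ID
    "max_num_results": 10,              # must be between 1 and 50 inclusive
    "ranking_options": {
        "ranker": "auto",
        "score_threshold": 0.6,         # drop results scoring below 0.6
    },
}
assert 1 <= file_search_tool["max_num_results"] <= 50
assert 0 <= file_search_tool["ranking_options"]["score_threshold"] <= 1
```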

OpenAI.FileSearchToolCallItemParam

The results of a file search tool call.
NameTypeDescriptionRequiredDefault
queriesarrayThe queries used to search for files.Yes
resultsarrayThe results of the file search tool call.No
typeenum
Possible values: file_search_call
Yes

OpenAI.FileSearchToolCallItemResource

The results of a file search tool call.
NameTypeDescriptionRequiredDefault
queriesarrayThe queries used to search for files.Yes
resultsarrayThe results of the file search tool call.No
statusenumThe status of the file search tool call. One of in_progress,
searching, completed, incomplete, or failed.
Possible values: in_progress, searching, completed, incomplete, failed
Yes
typeenum
Possible values: file_search_call
Yes

OpenAI.Filters

NameTypeDescriptionRequiredDefault
filtersarrayArray of filters to combine. Items can be ComparisonFilter or CompoundFilter.Yes
keystringThe key to compare against the value.Yes
typeenumType of operation: and or or.
Possible values: and, or
Yes
valuestring or number or booleanThe value to compare against the attribute key; supports string, number, or boolean types.Yes
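
The table above merges the fields of the two filter shapes: a comparison filter carries key/type/value, while a compound filter carries a type of and or or plus a filters array. A sketch combining the two, with illustrative attribute keys:

```python
compound_filter = {
    "type": "and",  # combine sub-filters with logical AND
    "filters": [
        {"key": "author", "type": "eq", "value": "alice"},  # comparison filter
        {"key": "draft", "type": "eq", "value": False},
    ],
}
assert compound_filter["type"] in {"and", "or"}
```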

OpenAI.FineTuneDPOHyperparameters

The hyperparameters used for the DPO fine-tuning job.
NameTypeDescriptionRequiredDefault
batch_sizeenum
Possible values: auto
No
betaenum
Possible values: auto
No
learning_rate_multiplierenum
Possible values: auto
No
n_epochsenum
Possible values: auto
No

OpenAI.FineTuneDPOMethod

Configuration for the DPO fine-tuning method.
NameTypeDescriptionRequiredDefault
hyperparametersOpenAI.FineTuneDPOHyperparametersThe hyperparameters used for the DPO fine-tuning job.No

OpenAI.FineTuneMethod

The method used for fine-tuning.
NameTypeDescriptionRequiredDefault
dpoOpenAI.FineTuneDPOMethodConfiguration for the DPO fine-tuning method.No
reinforcementAzureFineTuneReinforcementMethodNo
supervisedOpenAI.FineTuneSupervisedMethodConfiguration for the supervised fine-tuning method.No
typeenumThe type of method. Is either supervised, dpo, or reinforcement.
Possible values: supervised, dpo, reinforcement
Yes
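
The type discriminator names which of the sibling configuration objects applies. A sketch selecting the DPO method with all hyperparameters left on auto:

```python
method = {
    "type": "dpo",  # one of supervised, dpo, reinforcement
    "dpo": {
        "hyperparameters": {
            "batch_size": "auto",
            "beta": "auto",
            "learning_rate_multiplier": "auto",
            "n_epochs": "auto",
        }
    },
}
# The selected method key should match the discriminator.
assert method["type"] in method
```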

OpenAI.FineTuneReinforcementHyperparameters

The hyperparameters used for the reinforcement fine-tuning job.
NameTypeDescriptionRequiredDefault
batch_sizeenum
Possible values: auto
No
compute_multiplierenum
Possible values: auto
No
eval_intervalenum
Possible values: auto
No
eval_samplesenum
Possible values: auto
No
learning_rate_multiplierenum
Possible values: auto
No
n_epochsenum
Possible values: auto
No
reasoning_effortenumLevel of reasoning effort.
Possible values: default, low, medium, high
No

OpenAI.FineTuneSupervisedHyperparameters

The hyperparameters used for the fine-tuning job.
NameTypeDescriptionRequiredDefault
batch_sizeenum
Possible values: auto
No
learning_rate_multiplierenum
Possible values: auto
No
n_epochsenum
Possible values: auto
No

OpenAI.FineTuneSupervisedMethod

Configuration for the supervised fine-tuning method.
NameTypeDescriptionRequiredDefault
hyperparametersOpenAI.FineTuneSupervisedHyperparametersThe hyperparameters used for the fine-tuning job.No

OpenAI.FineTuningIntegration

Discriminator for OpenAI.FineTuningIntegration

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typestring (see valid models below)Yes

OpenAI.FineTuningIntegrationWandb

NameTypeDescriptionRequiredDefault
typeenumThe type of the integration being enabled for the fine-tuning job
Possible values: wandb
Yes
wandbobjectThe settings for your integration with Weights and Biases. This payload specifies the project that
metrics will be sent to. Optionally, you can set an explicit display name for your run, add tags
to your run, and set a default entity (team, username, etc.) to be associated with your run.
Yes
└─ entitystringThe entity to use for the run. This allows you to set the team or username of the WandB user that you would
like associated with the run. If not set, the default entity for the registered WandB API key is used.
No
└─ namestringA display name to set for the run. If not set, we will use the Job ID as the name.No
└─ projectstringThe name of the project that the new run will be created under.No
└─ tagsarrayA list of tags to be attached to the newly created run. These tags are passed through directly to WandB. Some
default tags are generated by OpenAI: “openai/finetune”, “openai/{base-model}”, “openai/{ftjob-abcdef}”.
No
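
As a sketch, a Weights and Biases integration payload (project, run name, entity, and tag are all hypothetical values):

```python
wandb_integration = {
    "type": "wandb",
    "wandb": {
        "project": "ft-experiments",   # hypothetical W&B project name
        "name": "customer-intent-v2",  # optional display name for the run
        "entity": "my-team",           # optional W&B team or username
        "tags": ["prod-candidate"],    # merged with OpenAI-generated default tags
    },
}
```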

OpenAI.FineTuningJob

The fine_tuning.job object represents a fine-tuning job that has been created through the API.
NameTypeDescriptionRequiredDefault
created_atintegerThe Unix timestamp (in seconds) for when the fine-tuning job was created.Yes
errorobjectFor fine-tuning jobs that have failed, this will contain more information on the cause of the failure.Yes
└─ codestringA machine-readable error code.No
└─ messagestringA human-readable error message.No
└─ paramstringThe parameter that was invalid, usually training_file or validation_file. This field will be null if the failure was not parameter-specific.No
estimated_finishintegerThe Unix timestamp (in seconds) for when the fine-tuning job is estimated to finish. The value will be null if the fine-tuning job is not running.No
fine_tuned_modelstringThe name of the fine-tuned model that is being created. The value will be null if the fine-tuning job is still running.Yes
finished_atintegerThe Unix timestamp (in seconds) for when the fine-tuning job was finished. The value will be null if the fine-tuning job is still running.Yes
hyperparametersobjectThe hyperparameters used for the fine-tuning job. This value will only be returned when running supervised jobs.Yes
└─ batch_sizeenum
Possible values: auto
No
└─ learning_rate_multiplierenum
Possible values: auto
No
└─ n_epochsenum
Possible values: auto
No
idstringThe object identifier, which can be referenced in the API endpoints.Yes
integrationsarrayA list of integrations to enable for this fine-tuning job.No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
Yes
methodOpenAI.FineTuneMethodThe method used for fine-tuning.No
modelstringThe base model that is being fine-tuned.Yes
objectenumThe object type, which is always “fine_tuning.job”.
Possible values: fine_tuning.job
Yes
organization_idstringThe organization that owns the fine-tuning job.Yes
result_filesarrayThe compiled results file ID(s) for the fine-tuning job. You can retrieve the results with the Files API.Yes
seedintegerThe seed used for the fine-tuning job.Yes
statusenumThe current status of the fine-tuning job, which can be either validating_files, queued, running, succeeded, failed, or cancelled.
Possible values: validating_files, queued, running, succeeded, failed, cancelled
Yes
trained_tokensintegerThe total number of billable tokens processed by this fine-tuning job. The value will be null if the fine-tuning job is still running.Yes
training_filestringThe file ID used for training. You can retrieve the training data with the Files API.Yes
user_provided_suffixstringThe descriptive suffix applied to the job, as specified in the job creation request.No
validation_filestringThe file ID used for validation. You can retrieve the validation results with the Files API.Yes

OpenAI.FineTuningJobCheckpoint

The fine_tuning.job.checkpoint object represents a model checkpoint for a fine-tuning job that is ready to use.
NameTypeDescriptionRequiredDefault
created_atintegerThe Unix timestamp (in seconds) for when the checkpoint was created.Yes
fine_tuned_model_checkpointstringThe name of the fine-tuned checkpoint model that is created.Yes
fine_tuning_job_idstringThe name of the fine-tuning job that this checkpoint was created from.Yes
idstringThe checkpoint identifier, which can be referenced in the API endpoints.Yes
metricsobjectMetrics at the step number during the fine-tuning job.Yes
└─ full_valid_lossnumberNo
└─ full_valid_mean_token_accuracynumberNo
└─ stepnumberNo
└─ train_lossnumberNo
└─ train_mean_token_accuracynumberNo
└─ valid_lossnumberNo
└─ valid_mean_token_accuracynumberNo
objectenumThe object type, which is always “fine_tuning.job.checkpoint”.
Possible values: fine_tuning.job.checkpoint
Yes
step_numberintegerThe step number that the checkpoint was created at.Yes

OpenAI.FineTuningJobEvent

Fine-tuning job event object
NameTypeDescriptionRequiredDefault
created_atintegerThe Unix timestamp (in seconds) for when the fine-tuning job was created.Yes
dataThe data associated with the event.No
idstringThe object identifier.Yes
levelenumThe log level of the event.
Possible values: info, warn, error
Yes
messagestringThe message of the event.Yes
objectenumThe object type, which is always “fine_tuning.job.event”.
Possible values: fine_tuning.job.event
Yes
typeenumThe type of event.
Possible values: message, metrics
No

OpenAI.FunctionObject

NameTypeDescriptionRequiredDefault
descriptionstringA description of what the function does, used by the model to choose when and how to call the function.No
namestringThe name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.Yes
parametersThe parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.

Omitting parameters defines a function with an empty parameter list.
No
strictbooleanWhether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the function calling guide.NoFalse
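
A minimal function definition following the constraints above (the function name and parameter schema are hypothetical). Note that strict mode supports only a subset of JSON Schema:

```python
import re

get_weather = {
    "name": "get_weather",  # a-z, A-Z, 0-9, underscores and dashes; max length 64
    "description": "Get the current weather for a given city.",
    "strict": True,         # model must follow the schema exactly
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
        "additionalProperties": False,
    },
}
# Check the name against the documented character and length constraints.
assert re.fullmatch(r"[A-Za-z0-9_-]{1,64}", get_weather["name"])
```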

OpenAI.FunctionTool

Defines a function in your own code the model can choose to call. Learn more about function calling.
NameTypeDescriptionRequiredDefault
descriptionstringA description of the function. Used by the model to determine whether or not to call the function.No
namestringThe name of the function to call.Yes
parametersA JSON schema object describing the parameters of the function.Yes
strictbooleanWhether to enforce strict parameter validation. Default true.Yes
typeenumThe type of the function tool. Always function.
Possible values: function
Yes

OpenAI.FunctionToolCallItemParam

A tool call to run a function. See the function calling guide for more information.
NameTypeDescriptionRequiredDefault
argumentsstringA JSON string of the arguments to pass to the function.Yes
call_idstringThe unique ID of the function tool call generated by the model.Yes
namestringThe name of the function to run.Yes
typeenum
Possible values: function_call
Yes

OpenAI.FunctionToolCallItemResource

A tool call to run a function. See the function calling guide for more information.
NameTypeDescriptionRequiredDefault
argumentsstringA JSON string of the arguments to pass to the function.Yes
call_idstringThe unique ID of the function tool call generated by the model.Yes
namestringThe name of the function to run.Yes
statusenumThe status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
Possible values: in_progress, completed, incomplete
Yes
typeenum
Possible values: function_call
Yes

OpenAI.FunctionToolCallOutputItemParam

The output of a function tool call.
NameTypeDescriptionRequiredDefault
call_idstringThe unique ID of the function tool call generated by the model.Yes
outputstringA JSON string of the output of the function tool call.Yes
typeenum
Possible values: function_call_output
Yes

OpenAI.FunctionToolCallOutputItemResource

The output of a function tool call.
NameTypeDescriptionRequiredDefault
call_idstringThe unique ID of the function tool call generated by the model.Yes
outputstringA JSON string of the output of the function tool call.Yes
statusenumThe status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
Possible values: in_progress, completed, incomplete
Yes
typeenum
Possible values: function_call_output
Yes
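
The two item shapes above form a round trip: the model emits a function_call carrying a call_id, and the caller returns a function_call_output echoing that same call_id. A sketch with a hypothetical ID and function:

```python
import json

# The model emits a function_call item...
function_call = {
    "type": "function_call",
    "call_id": "call_abc123",  # hypothetical ID generated by the model
    "name": "get_weather",
    "arguments": json.dumps({"city": "Seattle"}),
}

# ...the caller runs the function, then returns the result under the same call_id.
args = json.loads(function_call["arguments"])
function_call_output = {
    "type": "function_call_output",
    "call_id": function_call["call_id"],
    "output": json.dumps({"city": args["city"], "temperature_c": 12}),
}
assert function_call_output["call_id"] == function_call["call_id"]
```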

OpenAI.Grader

Discriminator for OpenAI.Grader

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.GraderTypeYes

OpenAI.GraderLabelModel

A LabelModelGrader object which uses a model to assign labels to each item in the evaluation.
NameTypeDescriptionRequiredDefault
inputarrayYes
labelsarrayThe labels to assign to each item in the evaluation.Yes
modelstringThe model to use for the evaluation. Must support structured outputs.Yes
namestringThe name of the grader.Yes
passing_labelsarrayThe labels that indicate a passing result. Must be a subset of labels.Yes
typeenumThe object type, which is always label_model.
Possible values: label_model
Yes

OpenAI.GraderMulti

A MultiGrader object combines the output of multiple graders to produce a single score.
NameTypeDescriptionRequiredDefault
calculate_outputstringA formula to calculate the output based on grader results.Yes
gradersobjectYes
namestringThe name of the grader.Yes
typeenumThe object type, which is always multi.
Possible values: multi
Yes

OpenAI.GraderPython

A PythonGrader object that runs a python script on the input.
NameTypeDescriptionRequiredDefault
image_tagstringThe image tag to use for the python script.No
namestringThe name of the grader.Yes
sourcestringThe source code of the python script.Yes
typeenumThe object type, which is always python.
Possible values: python
Yes

OpenAI.GraderScoreModel

A ScoreModelGrader object that uses a model to assign a score to the input.
NameTypeDescriptionRequiredDefault
inputarrayThe input text. This may include template strings.Yes
modelstringThe model to use for the evaluation.Yes
namestringThe name of the grader.Yes
rangearrayThe range of the score. Defaults to [0, 1].No
sampling_paramsThe sampling parameters for the model.No
typeenumThe object type, which is always score_model.
Possible values: score_model
Yes

OpenAI.GraderStringCheck

A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.
NameTypeDescriptionRequiredDefault
inputstringThe input text. This may include template strings.Yes
namestringThe name of the grader.Yes
operationenumThe string check operation to perform. One of eq, ne, like, or ilike.
Possible values: eq, ne, like, ilike
Yes
referencestringThe reference text. This may include template strings.Yes
typeenumThe object type, which is always string_check.
Possible values: string_check
Yes
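
As a sketch, a string-check grader comparing a sampled answer to a ground-truth field (the template variable names are illustrative):

```python
string_check_grader = {
    "type": "string_check",
    "name": "exact-answer",
    "input": "{{sample.output_text}}",    # template string resolved per item
    "reference": "{{item.ground_truth}}", # template string resolved per item
    "operation": "eq",                    # one of eq, ne, like, ilike
}
assert string_check_grader["operation"] in {"eq", "ne", "like", "ilike"}
```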

OpenAI.GraderTextSimilarity

A TextSimilarityGrader object which grades text based on similarity metrics.
NameTypeDescriptionRequiredDefault
evaluation_metricenumThe evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l.
Possible values: fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l
Yes
inputstringThe text being graded.Yes
namestringThe name of the grader.Yes
referencestringThe text being graded against.Yes
typeenumThe type of grader.
Possible values: text_similarity
Yes

OpenAI.GraderType

PropertyValue
Typestring
Valuesstring_check
text_similarity
score_model
label_model
python
multi

OpenAI.ImageGenTool

A tool that generates images using a model like gpt-image-1-series.
NameTypeDescriptionRequiredDefault
backgroundenumBackground type for the generated image. One of transparent,
opaque, or auto. Default: auto.
Possible values: transparent, opaque, auto
No
input_image_maskobjectOptional mask for inpainting. Contains image_url
(string, optional) and file_id (string, optional).
No
└─ file_idstringFile ID for the mask image.No
└─ image_urlstringBase64-encoded mask image.No
modelenumThe image generation model to use. Default: gpt-image-1.
Possible values: gpt-image-1
No
moderationenumModeration level for the generated image. Default: auto.
Possible values: auto, low
No
output_compressionintegerCompression level for the output image. Default: 100.No100
output_formatenumThe output format of the generated image. One of png, webp, or
jpeg. Default: png.
Possible values: png, webp, jpeg
No
partial_imagesintegerNumber of partial images to generate in streaming mode, from 0 (default value) to 3.No0
qualityenumThe quality of the generated image. One of low, medium, high,
or auto. Default: auto.
Possible values: low, medium, high, auto
No
sizeenumThe size of the generated image. One of 1024x1024, 1024x1536,
1536x1024, or auto. Default: auto.
Possible values: 1024x1024, 1024x1536, 1536x1024, auto
No
typeenumThe type of the image generation tool. Always image_generation.
Possible values: image_generation
Yes
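
Combining the options above, an image generation tool configuration might look like this sketch (all values chosen for illustration):

```python
image_tool = {
    "type": "image_generation",
    "model": "gpt-image-1",
    "size": "1024x1024",
    "quality": "high",
    "output_format": "png",
    "output_compression": 100,    # no compression
    "background": "transparent",
    "partial_images": 0,          # no streaming previews
}
assert image_tool["size"] in {"1024x1024", "1024x1536", "1536x1024", "auto"}
assert 0 <= image_tool["partial_images"] <= 3
```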

OpenAI.ImageGenToolCallItemParam

An image generation request made by the model.
NameTypeDescriptionRequiredDefault
resultstringThe generated image encoded in base64.Yes
typeenum
Possible values: image_generation_call
Yes

OpenAI.ImageGenToolCallItemResource

An image generation request made by the model.
NameTypeDescriptionRequiredDefault
resultstringThe generated image encoded in base64.Yes
statusenum
Possible values: in_progress, completed, generating, failed
Yes
typeenum
Possible values: image_generation_call
Yes

OpenAI.ImplicitUserMessage

NameTypeDescriptionRequiredDefault
contentstring or arrayYes

OpenAI.Includable

Specify additional output data to include in the model response. Currently supported values are:
  • code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.
  • computer_call_output.output.image_url: Include image urls from the computer call output.
  • file_search_call.results: Include the search results of the file search tool call.
  • message.input_image.image_url: Include image urls from the input message.
  • message.output_text.logprobs: Include logprobs with assistant messages.
  • reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).
PropertyValue
DescriptionSpecify additional output data to include in the model response. Currently
supported values are:
- code_interpreter_call.outputs: Includes the outputs of python code execution
in code interpreter tool call items.
- computer_call_output.output.image_url: Include image urls from the computer call output.
- file_search_call.results: Include the search results of
the file search tool call.
- message.input_image.image_url: Include image urls from the input message.
- message.output_text.logprobs: Include logprobs with assistant messages.
- reasoning.encrypted_content: Includes an encrypted version of reasoning
tokens in reasoning item outputs. This enables reasoning items to be used in
multi-turn conversations when using the Responses API statelessly (like
when the store parameter is set to false, or when an organization is
enrolled in the zero data retention program).
Typestring
Valuescode_interpreter_call.outputs
computer_call_output.output.image_url
file_search_call.results
message.input_image.image_url
message.output_text.logprobs
reasoning.encrypted_content

OpenAI.ItemContent

Discriminator for OpenAI.ItemContent

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.ItemContentTypeMulti-modal input and output contents.Yes

OpenAI.ItemContentInputAudio

An audio input to the model.
NameTypeDescriptionRequiredDefault
datastringBase64-encoded audio data.Yes
formatenumThe format of the audio data. Currently supported formats are mp3 and
wav.
Possible values: mp3, wav
Yes
typeenumThe type of the input item. Always input_audio.
Possible values: input_audio
Yes

OpenAI.ItemContentInputFile

A file input to the model.
NameTypeDescriptionRequiredDefault
file_datastringThe content of the file to be sent to the model.No
file_idstringThe ID of the file to be sent to the model.No
filenamestringThe name of the file to be sent to the model.No
typeenumThe type of the input item. Always input_file.
Possible values: input_file
Yes

OpenAI.ItemContentInputImage

An image input to the model.
NameTypeDescriptionRequiredDefault
detailenumThe detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
Possible values: low, high, auto
No
file_idstringThe ID of the file to be sent to the model.No
image_urlstringThe URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.No
typeenumThe type of the input item. Always input_image.
Possible values: input_image
Yes

OpenAI.ItemContentInputText

A text input to the model.
NameTypeDescriptionRequiredDefault
textstringThe text input to the model.Yes
typeenumThe type of the input item. Always input_text.
Possible values: input_text
Yes
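
The input content parts above are discriminated by type and can be mixed in a single message. A sketch with a hypothetical file ID and a placeholder base64 payload:

```python
content = [
    {"type": "input_text", "text": "Summarize the attached file and describe the image."},
    {"type": "input_file", "file_id": "file-abc123"},  # hypothetical uploaded file
    {
        "type": "input_image",
        "image_url": "data:image/png;base64,AAAA",     # placeholder data URL
        "detail": "auto",
    },
]
# Every part in an input message uses an input_* content type.
assert all(part["type"].startswith("input_") for part in content)
```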

OpenAI.ItemContentOutputAudio

An audio output from the model.
NameTypeDescriptionRequiredDefault
datastringBase64-encoded audio data from the model.Yes
transcriptstringThe transcript of the audio data from the model.Yes
typeenumThe type of the output audio. Always output_audio.
Possible values: output_audio
Yes

OpenAI.ItemContentOutputText

A text output from the model.
NameTypeDescriptionRequiredDefault
annotationsarrayThe annotations of the text output.Yes
logprobsarrayNo
textstringThe text output from the model.Yes
typeenumThe type of the output text. Always output_text.
Possible values: output_text
Yes

OpenAI.ItemContentRefusal

A refusal from the model.
NameTypeDescriptionRequiredDefault
refusalstringThe refusal explanation from the model.Yes
typeenumThe type of the refusal. Always refusal.
Possible values: refusal
Yes

OpenAI.ItemContentType

Multi-modal input and output contents.
PropertyValue
DescriptionMulti-modal input and output contents.
Typestring
Valuesinput_text
input_audio
input_image
input_file
output_text
output_audio
refusal

OpenAI.ItemParam

Content item used to generate a response.

Discriminator for OpenAI.ItemParam

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.ItemTypeYes

OpenAI.ItemReferenceItemParam

An internal identifier for an item to reference.
NameTypeDescriptionRequiredDefault
idstringThe service-originated ID of the previously generated response item being referenced.Yes
typeenum
Possible values: item_reference
Yes

OpenAI.ItemResource

Content item used to generate a response.

Discriminator for OpenAI.ItemResource

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
idstringYes
typeOpenAI.ItemTypeYes

OpenAI.ItemType

PropertyValue
Typestring
Valuesmessage
file_search_call
function_call
function_call_output
computer_call
computer_call_output
web_search_call
reasoning
item_reference
image_generation_call
code_interpreter_call
local_shell_call
local_shell_call_output
mcp_list_tools
mcp_approval_request
mcp_approval_response
mcp_call

OpenAI.ListFineTuningJobCheckpointsResponse

NameTypeDescriptionRequiredDefault
dataarrayYes
first_idstringNo
has_morebooleanYes
last_idstringNo
objectenum
Possible values: list
Yes

OpenAI.ListFineTuningJobEventsResponse

NameTypeDescriptionRequiredDefault
dataarrayYes
has_morebooleanYes
objectenum
Possible values: list
Yes

OpenAI.ListModelsResponse

NameTypeDescriptionRequiredDefault
dataarrayYes
objectenum
Possible values: list
Yes

OpenAI.ListPaginatedFineTuningJobsResponse

NameTypeDescriptionRequiredDefault
dataarrayYes
has_morebooleanYes
objectenum
Possible values: list
Yes

OpenAI.ListVectorStoreFilesFilter

PropertyValue
Typestring
Valuesin_progress
completed
failed
cancelled

OpenAI.ListVectorStoreFilesResponse

NameTypeDescriptionRequiredDefault
dataarrayYes
first_idstringYes
has_morebooleanYes
last_idstringYes
objectenum
Possible values: list
Yes

OpenAI.ListVectorStoresResponse

NameTypeDescriptionRequiredDefault
dataarrayYes
first_idstringYes
has_morebooleanYes
last_idstringYes
objectenum
Possible values: list
Yes

OpenAI.LocalShellExecAction

Execute a shell command on the server.
NameTypeDescriptionRequiredDefault
commandarrayThe command to run.Yes
envobjectEnvironment variables to set for the command.Yes
timeout_msintegerOptional timeout in milliseconds for the command.No
typeenumThe type of the local shell action. Always exec.
Possible values: exec
Yes
userstringOptional user to run the command as.No
working_directorystringOptional working directory to run the command in.No
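
The fields above can be combined into an action payload such as the following sketch; the command, user, and paths are illustrative, not values the service requires.

```python
# Illustrative payload matching the OpenAI.LocalShellExecAction schema.
exec_action = {
    "type": "exec",                    # always "exec" for this action
    "command": ["ls", "-la"],          # the command to run, as an argv-style array
    "env": {"PATH": "/usr/bin:/bin"},  # environment variables for the command
    "timeout_ms": 5000,                # optional timeout in milliseconds
    "user": "sandbox",                 # optional user to run the command as
    "working_directory": "/tmp",       # optional working directory
}
```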

OpenAI.LocalShellTool

A tool that allows the model to execute shell commands in a local environment.
NameTypeDescriptionRequiredDefault
typeenumThe type of the local shell tool. Always local_shell.
Possible values: local_shell
Yes

OpenAI.LocalShellToolCallItemParam

A tool call to run a command on the local shell.
NameTypeDescriptionRequiredDefault
actionOpenAI.LocalShellExecActionExecute a shell command on the server.Yes
call_idstringThe unique ID of the local shell tool call generated by the model.Yes
typeenum
Possible values: local_shell_call
Yes

OpenAI.LocalShellToolCallItemResource

A tool call to run a command on the local shell.
NameTypeDescriptionRequiredDefault
actionOpenAI.LocalShellExecActionExecute a shell command on the server.Yes
call_idstringThe unique ID of the local shell tool call generated by the model.Yes
statusenum
Possible values: in_progress, completed, incomplete
Yes
typeenum
Possible values: local_shell_call
Yes

OpenAI.LocalShellToolCallOutputItemParam

The output of a local shell tool call.
NameTypeDescriptionRequiredDefault
outputstringA JSON string of the output of the local shell tool call.Yes
typeenum
Possible values: local_shell_call_output
Yes

OpenAI.LocalShellToolCallOutputItemResource

The output of a local shell tool call.
NameTypeDescriptionRequiredDefault
outputstringA JSON string of the output of the local shell tool call.Yes
statusenum
Possible values: in_progress, completed, incomplete
Yes
typeenum
Possible values: local_shell_call_output
Yes

OpenAI.Location

Discriminator for OpenAI.Location

This component uses the property type to discriminate between different types:
Type ValueSchema
approximateOpenAI.ApproximateLocation
NameTypeDescriptionRequiredDefault
typeOpenAI.LocationTypeYes

OpenAI.LocationType

PropertyValue
Typestring
Valuesapproximate

OpenAI.LogProb

The log probability of a token.
NameTypeDescriptionRequiredDefault
bytesarrayYes
logprobnumberYes
tokenstringYes
top_logprobsarrayYes

OpenAI.MCPApprovalRequestItemParam

A request for human approval of a tool invocation.
NameTypeDescriptionRequiredDefault
argumentsstringA JSON string of arguments for the tool.Yes
namestringThe name of the tool to run.Yes
server_labelstringThe label of the MCP server making the request.Yes
typeenum
Possible values: mcp_approval_request
Yes

OpenAI.MCPApprovalRequestItemResource

A request for human approval of a tool invocation.
NameTypeDescriptionRequiredDefault
argumentsstringA JSON string of arguments for the tool.Yes
namestringThe name of the tool to run.Yes
server_labelstringThe label of the MCP server making the request.Yes
typeenum
Possible values: mcp_approval_request
Yes

OpenAI.MCPApprovalResponseItemParam

A response to an MCP approval request.
NameTypeDescriptionRequiredDefault
approval_request_idstringThe ID of the approval request being answered.Yes
approvebooleanWhether the request was approved.Yes
reasonstringOptional reason for the decision.No
typeenum
Possible values: mcp_approval_response
Yes

OpenAI.MCPApprovalResponseItemResource

A response to an MCP approval request.
NameTypeDescriptionRequiredDefault
approval_request_idstringThe ID of the approval request being answered.Yes
approvebooleanWhether the request was approved.Yes
reasonstringOptional reason for the decision.No
typeenum
Possible values: mcp_approval_response
Yes
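
The request and response item types above pair up by ID. A minimal sketch of that pairing, using hand-built example items (the IDs, server label, and tool name are hypothetical):

```python
# An approval request item as it might appear in a response's output...
approval_request = {
    "type": "mcp_approval_request",
    "id": "mcpr_123",                   # hypothetical item ID
    "server_label": "deepwiki",
    "name": "read_wiki_structure",
    "arguments": '{"repo": "example/repo"}',  # JSON string of tool arguments
}

# ...and the response item that answers it in a follow-up request.
approval_response = {
    "type": "mcp_approval_response",
    "approval_request_id": approval_request["id"],  # answers the request above
    "approve": True,
    "reason": "Read-only call against a public repository.",
}
```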

OpenAI.MCPCallItemParam

An invocation of a tool on an MCP server.
NameTypeDescriptionRequiredDefault
argumentsstringA JSON string of the arguments passed to the tool.Yes
errorstringThe error from the tool call, if any.No
namestringThe name of the tool that was run.Yes
outputstringThe output from the tool call.No
server_labelstringThe label of the MCP server running the tool.Yes
typeenum
Possible values: mcp_call
Yes

OpenAI.MCPCallItemResource

An invocation of a tool on an MCP server.
NameTypeDescriptionRequiredDefault
argumentsstringA JSON string of the arguments passed to the tool.Yes
errorstringThe error from the tool call, if any.No
namestringThe name of the tool that was run.Yes
outputstringThe output from the tool call.No
server_labelstringThe label of the MCP server running the tool.Yes
typeenum
Possible values: mcp_call
Yes

OpenAI.MCPListToolsItemParam

A list of tools available on an MCP server.
NameTypeDescriptionRequiredDefault
errorstringError message if the server could not list tools.No
server_labelstringThe label of the MCP server.Yes
toolsarrayThe tools available on the server.Yes
typeenum
Possible values: mcp_list_tools
Yes

OpenAI.MCPListToolsItemResource

A list of tools available on an MCP server.
NameTypeDescriptionRequiredDefault
errorstringError message if the server could not list tools.No
server_labelstringThe label of the MCP server.Yes
toolsarrayThe tools available on the server.Yes
typeenum
Possible values: mcp_list_tools
Yes

OpenAI.MCPListToolsTool

A tool available on an MCP server.
NameTypeDescriptionRequiredDefault
annotationsAdditional annotations about the tool.No
descriptionstringThe description of the tool.No
input_schemaThe JSON schema describing the tool’s input.Yes
namestringThe name of the tool.Yes

OpenAI.MCPTool

Give the model access to additional tools via remote Model Context Protocol (MCP) servers.
NameTypeDescriptionRequiredDefault
allowed_toolsobjectNo
└─ tool_namesarrayList of allowed tool names.No
headersobjectOptional HTTP headers to send to the MCP server. Use for authentication
or other purposes.
No
require_approvalobject (see valid models below)Specify which of the MCP server’s tools require approval.No
server_labelstringA label for this MCP server, used to identify it in tool calls.Yes
server_urlstringThe URL for the MCP server.Yes
typeenumThe type of the MCP tool. Always mcp.
Possible values: mcp
Yes
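
As a sketch, an MCP tool definition might look like the following; the server label, URL, and tool name are hypothetical, and `require_approval` is shown in a string shorthand (the schema above also documents an object form for per-tool approval settings).

```python
# Illustrative tool definition matching the OpenAI.MCPTool schema.
mcp_tool = {
    "type": "mcp",
    "server_label": "deepwiki",                     # identifies the server in tool calls
    "server_url": "https://example.com/mcp",        # hypothetical MCP server URL
    "headers": {"Authorization": "Bearer <token>"}, # optional auth headers
    "allowed_tools": {"tool_names": ["read_wiki_structure"]},
    "require_approval": "never",                    # string shorthand; object form also accepted
}
```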

OpenAI.MetadataPropertyForRequest

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
NameTypeDescriptionRequiredDefault
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
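
The documented limits (at most 16 pairs, keys up to 64 characters, values up to 512 characters) can be checked client-side; the keys and values below are illustrative.

```python
# Illustrative metadata object within the documented limits.
metadata = {
    "customer_id": "cus_123",
    "order_stage": "review",
}
assert len(metadata) <= 16
assert all(len(k) <= 64 and len(v) <= 512 for k, v in metadata.items())
```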

OpenAI.Model

Describes an OpenAI model offering that can be used with the API.
NameTypeDescriptionRequiredDefault
createdintegerThe Unix timestamp (in seconds) when the model was created.Yes
idstringThe model identifier, which can be referenced in the API endpoints.Yes
objectenumThe object type, which is always “model”.
Possible values: model
Yes
owned_bystringThe organization that owns the model.Yes

OpenAI.OtherChunkingStrategyResponseParam

This is returned when the chunking strategy is unknown. Typically, this is because the file was indexed before the chunking_strategy concept was introduced in the API.
NameTypeDescriptionRequiredDefault
typeenumAlways other.
Possible values: other
Yes

OpenAI.ParallelToolCalls

Whether to enable parallel function calling during tool use. Type: boolean

OpenAI.Prompt

Reference to a prompt template and its variables.
NameTypeDescriptionRequiredDefault
idstringThe unique identifier of the prompt template to use.Yes
variablesobjectOptional map of values to substitute in for variables in your
prompt. The substitution values can either be strings, or other
Response input types like images or files.
No
versionstringOptional version of the prompt template.No
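
A prompt reference combining these fields might be sketched as follows; the template ID, version, and variable names are hypothetical.

```python
# Illustrative payload matching the OpenAI.Prompt schema.
prompt_ref = {
    "id": "pmpt_abc123",                     # hypothetical prompt template ID
    "version": "2",                          # optional template version
    "variables": {"customer_name": "Jane"},  # values substituted into the template
}
```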

OpenAI.RankingOptions

NameTypeDescriptionRequiredDefault
rankerenumThe ranker to use for the file search.
Possible values: auto, default-2024-11-15
No
score_thresholdnumberThe score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.No

OpenAI.Reasoning

reasoning models only Configuration options for reasoning models.
NameTypeDescriptionRequiredDefault
effortobjectreasoning models only

Constrains effort on reasoning for
reasoning models.
Currently supported values are low, medium, and high. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
No
generate_summaryenumDeprecated: use summary instead.

A summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model’s reasoning process.
One of auto, concise, or detailed.
Possible values: auto, concise, detailed
No
summaryenumA summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model’s reasoning process.
One of auto, concise, or detailed.
Possible values: auto, concise, detailed
No

OpenAI.ReasoningEffort

reasoning models only Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
PropertyValue
Descriptionreasoning models only

Constrains effort on reasoning for
reasoning models.
Currently supported values are low, medium, and high. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
Typestring
Valueslow
medium
high
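
Putting the two properties together, a reasoning configuration is a small object whose values must come from the enumerations above:

```python
# Illustrative reasoning configuration per the OpenAI.Reasoning schema.
reasoning = {
    "effort": "medium",  # one of "low", "medium", "high"
    "summary": "auto",   # one of "auto", "concise", "detailed"
}
```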

OpenAI.ReasoningItemParam

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.
NameTypeDescriptionRequiredDefault
encrypted_contentstringThe encrypted content of the reasoning item - populated when a response is
generated with reasoning.encrypted_content in the include parameter.
No
summaryarrayReasoning text contents.Yes
typeenum
Possible values: reasoning
Yes

OpenAI.ReasoningItemResource

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing context.
NameTypeDescriptionRequiredDefault
encrypted_contentstringThe encrypted content of the reasoning item - populated when a response is
generated with reasoning.encrypted_content in the include parameter.
No
summaryarrayReasoning text contents.Yes
typeenum
Possible values: reasoning
Yes

OpenAI.ReasoningItemSummaryPart

Discriminator for OpenAI.ReasoningItemSummaryPart

This component uses the property type to discriminate between different types:
Type ValueSchema
summary_textOpenAI.ReasoningItemSummaryTextPart
NameTypeDescriptionRequiredDefault
typeOpenAI.ReasoningItemSummaryPartTypeYes

OpenAI.ReasoningItemSummaryPartType

PropertyValue
Typestring
Valuessummary_text

OpenAI.ReasoningItemSummaryTextPart

NameTypeDescriptionRequiredDefault
textstringYes
typeenum
Possible values: summary_text
Yes

OpenAI.Response

NameTypeDescriptionRequiredDefault
backgroundbooleanWhether to run the model response in the background.
Learn more.
NoFalse
created_atintegerUnix timestamp (in seconds) of when this Response was created.Yes
errorobjectAn error object returned when the model fails to generate a Response.Yes
└─ codeOpenAI.ResponseErrorCodeThe error code for the response.No
└─ messagestringA human-readable description of the error.No
idstringUnique identifier for this Response.Yes
incomplete_detailsobjectDetails about why the response is incomplete.Yes
└─ reasonenumThe reason why the response is incomplete.
Possible values: max_output_tokens, content_filter
No
instructionsstring or arrayA system (or developer) message inserted into the model’s context.

When using along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
Yes
max_output_tokensintegerAn upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.No
max_tool_callsintegerThe maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
Yes
objectenumThe object type of this resource - always set to response.
Possible values: response
Yes
outputarrayAn array of content items generated by the model.

- The length and order of items in the output array is dependent
on the model’s response.
- Rather than accessing the first item in the output array and
assuming it’s an assistant message with the content generated by
the model, you might consider using the output_text property where
supported in SDKs.
Yes
output_textstringSDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
No
parallel_tool_callsbooleanWhether to allow the model to run tool calls in parallel.YesTrue
previous_response_idstringThe unique ID of the previous response to the model. Use this to
create multi-turn conversations.
No
promptobjectReference to a prompt template and its variables.
No
└─ idstringThe unique identifier of the prompt template to use.No
└─ variablesOpenAI.ResponsePromptVariablesOptional map of values to substitute in for variables in your
prompt. The substitution values can either be strings, or other
Response input types like images or files.
No
└─ versionstringOptional version of the prompt template.No
reasoningobjectreasoning models only

Configuration options for
reasoning models.
No
└─ effortOpenAI.ReasoningEffortreasoning models only

Constrains effort on reasoning for
reasoning models.
Currently supported values are low, medium, and high. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
No
└─ generate_summaryenumDeprecated: use summary instead.

A summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model’s reasoning process.
One of auto, concise, or detailed.
Possible values: auto, concise, detailed
No
└─ summaryenumA summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model’s reasoning process.
One of auto, concise, or detailed.
Possible values: auto, concise, detailed
No
statusenumThe status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Possible values: completed, failed, in_progress, cancelled, queued, incomplete
No
temperaturenumberWhat sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
Yes
textobjectConfiguration options for a text response from the model. Can be plain
text or structured JSON data. Learn more: Structured Outputs
No
└─ formatOpenAI.ResponseTextFormatConfigurationNo
tool_choiceobjectControls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or
more tools.

required means the model must call one or more tools.
No
└─ typeOpenAI.ToolChoiceObjectTypeIndicates that the model should use a built-in tool to generate a response.No
toolsarrayAn array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

- Built-in tools: Tools that are provided by OpenAI that extend the
model’s capabilities, like web search or file search.
No
top_logprobsintegerAn integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.No
top_pnumberAn alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or temperature but not both.
Yes
truncationenumThe truncation strategy to use for the model response.
- auto: If the context of this response and previous ones exceeds
the model’s context window size, the model will truncate the
response to fit the context window by dropping input items in the
middle of the conversation.
- disabled (default): If a model response will exceed the context window
size for a model, the request will fail with a 400 error.
Possible values: auto, disabled
No
usageOpenAI.ResponseUsageRepresents token usage details including input tokens, output tokens,
a breakdown of output tokens, and the total tokens used.
No
userstringA unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.Yes
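
Because output_text exists only as an SDK convenience property, code working with raw JSON must aggregate the text itself. A minimal sketch of that aggregation, using a hand-built response payload rather than real API output:

```python
# Minimal hand-built response payload for illustration.
response = {
    "object": "response",
    "output": [
        {
            "type": "message",
            "role": "assistant",
            "content": [
                {"type": "output_text", "text": "Hello, ", "annotations": []},
                {"type": "output_text", "text": "world!", "annotations": []},
            ],
        }
    ],
}

# Concatenate the text of every output_text part across all message items,
# mirroring what the SDKs' output_text property provides.
output_text = "".join(
    part["text"]
    for item in response["output"]
    if item.get("type") == "message"
    for part in item.get("content", [])
    if part.get("type") == "output_text"
)
# output_text == "Hello, world!"
```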

OpenAI.ResponseCodeInterpreterCallCodeDeltaEvent

Emitted when a partial code snippet is streamed by the code interpreter.
NameTypeDescriptionRequiredDefault
deltastringThe partial code snippet being streamed by the code interpreter.Yes
item_idstringThe unique identifier of the code interpreter tool call item.Yes
obfuscationstringA field of random characters introduced by stream obfuscation. Stream obfuscation is a mechanism that mitigates certain side-channel attacks.Yes
output_indexintegerThe index of the output item in the response for which the code is being streamed.Yes
typeenumThe type of the event. Always response.code_interpreter_call_code.delta.
Possible values: response.code_interpreter_call_code.delta
Yes

OpenAI.ResponseCodeInterpreterCallCodeDoneEvent

Emitted when the code snippet is finalized by the code interpreter.
NameTypeDescriptionRequiredDefault
codestringThe final code snippet output by the code interpreter.Yes
item_idstringThe unique identifier of the code interpreter tool call item.Yes
output_indexintegerThe index of the output item in the response for which the code is finalized.Yes
typeenumThe type of the event. Always response.code_interpreter_call_code.done.
Possible values: response.code_interpreter_call_code.done
Yes
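
When streaming, the delta events carry partial code snippets keyed by item_id, and the done event carries the finalized code. A minimal sketch of accumulating deltas, using hand-built example events:

```python
# Hand-built example events; real streams interleave many other event types.
events = [
    {"type": "response.code_interpreter_call_code.delta", "item_id": "ci_1",
     "output_index": 0, "delta": "print("},
    {"type": "response.code_interpreter_call_code.delta", "item_id": "ci_1",
     "output_index": 0, "delta": "'hi')"},
    {"type": "response.code_interpreter_call_code.done", "item_id": "ci_1",
     "output_index": 0, "code": "print('hi')"},
]

buffers: dict[str, str] = {}
for event in events:
    if event["type"] == "response.code_interpreter_call_code.delta":
        # Append each partial snippet to the buffer for its item.
        buffers[event["item_id"]] = buffers.get(event["item_id"], "") + event["delta"]
    elif event["type"] == "response.code_interpreter_call_code.done":
        # The accumulated deltas should match the finalized code.
        assert buffers[event["item_id"]] == event["code"]
```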

OpenAI.ResponseCodeInterpreterCallCompletedEvent

Emitted when the code interpreter call is completed.
NameTypeDescriptionRequiredDefault
item_idstringThe unique identifier of the code interpreter tool call item.Yes
output_indexintegerThe index of the output item in the response for which the code interpreter call is completed.Yes
typeenumThe type of the event. Always response.code_interpreter_call.completed.
Possible values: response.code_interpreter_call.completed
Yes

OpenAI.ResponseCodeInterpreterCallInProgressEvent

Emitted when a code interpreter call is in progress.
NameTypeDescriptionRequiredDefault
item_idstringThe unique identifier of the code interpreter tool call item.Yes
output_indexintegerThe index of the output item in the response for which the code interpreter call is in progress.Yes
typeenumThe type of the event. Always response.code_interpreter_call.in_progress.
Possible values: response.code_interpreter_call.in_progress
Yes

OpenAI.ResponseCodeInterpreterCallInterpretingEvent

Emitted when the code interpreter is actively interpreting the code snippet.
NameTypeDescriptionRequiredDefault
item_idstringThe unique identifier of the code interpreter tool call item.Yes
output_indexintegerThe index of the output item in the response for which the code interpreter is interpreting code.Yes
typeenumThe type of the event. Always response.code_interpreter_call.interpreting.
Possible values: response.code_interpreter_call.interpreting
Yes

OpenAI.ResponseCompletedEvent

Emitted when the model response is complete.
NameTypeDescriptionRequiredDefault
responseobjectYes
└─ backgroundbooleanWhether to run the model response in the background.
Learn more.
NoFalse
└─ created_atintegerUnix timestamp (in seconds) of when this Response was created.No
└─ errorOpenAI.ResponseErrorAn error object returned when the model fails to generate a Response.No
└─ idstringUnique identifier for this Response.No
└─ incomplete_detailsobjectDetails about why the response is incomplete.No
└─ reasonenumThe reason why the response is incomplete.
Possible values: max_output_tokens, content_filter
No
└─ instructionsstring or arrayA system (or developer) message inserted into the model’s context.

When using along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
No
└─ max_output_tokensintegerAn upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.No
└─ max_tool_callsintegerThe maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.No
└─ metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
└─ objectenumThe object type of this resource - always set to response.
Possible values: response
No
└─ outputarrayAn array of content items generated by the model.

- The length and order of items in the output array is dependent
on the model’s response.
- Rather than accessing the first item in the output array and
assuming it’s an assistant message with the content generated by
the model, you might consider using the output_text property where
supported in SDKs.
No
└─ output_textstringSDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
No
└─ parallel_tool_callsbooleanWhether to allow the model to run tool calls in parallel.NoTrue
└─ previous_response_idstringThe unique ID of the previous response to the model. Use this to
create multi-turn conversations.
No
└─ promptOpenAI.PromptReference to a prompt template and its variables.
No
└─ reasoningOpenAI.Reasoningreasoning models only

Configuration options for
reasoning models.
No
└─ statusenumThe status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Possible values: completed, failed, in_progress, cancelled, queued, incomplete
No
└─ temperaturenumberWhat sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
No
└─ textobjectConfiguration options for a text response from the model. Can be plain
text or structured JSON data. Learn more: Structured Outputs
No
└─ formatOpenAI.ResponseTextFormatConfigurationNo
└─ tool_choiceOpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObjectHow the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
No
└─ toolsarrayAn array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

- Built-in tools: Tools that are provided by OpenAI that extend the
model’s capabilities, like web search or file search.
No
└─ top_logprobsintegerAn integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.No
└─ top_pnumberAn alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or temperature but not both.
No
└─ truncationenumThe truncation strategy to use for the model response.
- auto: If the context of this response and previous ones exceeds
the model’s context window size, the model will truncate the
response to fit the context window by dropping input items in the
middle of the conversation.
- disabled (default): If a model response will exceed the context window
size for a model, the request will fail with a 400 error.
Possible values: auto, disabled
No
└─ usageOpenAI.ResponseUsageRepresents token usage details including input tokens, output tokens,
a breakdown of output tokens, and the total tokens used.
No
└─ userstringA unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.No
typeenumThe type of the event. Always response.completed.
Possible values: response.completed
Yes

OpenAI.ResponseContentPartAddedEvent

Emitted when a new content part is added.
NameTypeDescriptionRequiredDefault
content_indexintegerThe index of the content part that was added.Yes
item_idstringThe ID of the output item that the content part was added to.Yes
output_indexintegerThe index of the output item that the content part was added to.Yes
partobjectYes
└─ typeOpenAI.ItemContentTypeMulti-modal input and output contents.No
typeenumThe type of the event. Always response.content_part.added.
Possible values: response.content_part.added
Yes

OpenAI.ResponseContentPartDoneEvent

Emitted when a content part is done.
NameTypeDescriptionRequiredDefault
content_indexintegerThe index of the content part that is done.Yes
item_idstringThe ID of the output item that the content part was added to.Yes
output_indexintegerThe index of the output item that the content part was added to.Yes
partobjectYes
└─ typeOpenAI.ItemContentTypeMulti-modal input and output contents.No
typeenumThe type of the event. Always response.content_part.done.
Possible values: response.content_part.done
Yes

OpenAI.ResponseCreatedEvent

An event that is emitted when a response is created.
NameTypeDescriptionRequiredDefault
responseobjectYes
└─ backgroundbooleanWhether to run the model response in the background.
Learn more.
NoFalse
└─ created_atintegerUnix timestamp (in seconds) of when this Response was created.No
└─ errorOpenAI.ResponseErrorAn error object returned when the model fails to generate a Response.No
└─ idstringUnique identifier for this Response.No
└─ incomplete_detailsobjectDetails about why the response is incomplete.No
└─ reasonenumThe reason why the response is incomplete.
Possible values: max_output_tokens, content_filter
No
└─ instructionsstring or arrayA system (or developer) message inserted into the model’s context.

When using along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
No
└─ max_output_tokensintegerAn upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.No
└─ max_tool_callsintegerThe maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.No
└─ metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
└─ objectenumThe object type of this resource - always set to response.
Possible values: response
No
└─ outputarrayAn array of content items generated by the model.

- The length and order of items in the output array is dependent
on the model’s response.
- Rather than accessing the first item in the output array and
assuming it’s an assistant message with the content generated by
the model, you might consider using the output_text property where
supported in SDKs.
No
└─ output_textstringSDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
No
└─ parallel_tool_callsbooleanWhether to allow the model to run tool calls in parallel.NoTrue
└─ previous_response_idstringThe unique ID of the previous response to the model. Use this to
create multi-turn conversations.
No
└─ promptOpenAI.PromptReference to a prompt template and its variables.
No
└─ reasoningOpenAI.Reasoningreasoning models only

Configuration options for
reasoning models.
No
└─ statusenumThe status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Possible values: completed, failed, in_progress, cancelled, queued, incomplete
No
└─ temperaturenumberWhat sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
No
└─ textobjectConfiguration options for a text response from the model. Can be plain
text or structured JSON data. Learn more: Structured Outputs
No
└─ formatOpenAI.ResponseTextFormatConfigurationNo
└─ tool_choiceOpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObjectHow the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
No
└─ toolsarrayAn array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

- Built-in tools: Tools that are provided by OpenAI that extend the
model’s capabilities, like web search or file search.
No
└─ top_logprobsintegerAn integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.No
└─ top_pnumberAn alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or temperature but not both.
No
└─ truncationenumThe truncation strategy to use for the model response.
- auto: If the context of this response and previous ones exceeds
the model’s context window size, the model will truncate the
response to fit the context window by dropping input items in the
middle of the conversation.
- disabled (default): If a model response will exceed the context window
size for a model, the request will fail with a 400 error.
Possible values: auto, disabled
No
└─ usageOpenAI.ResponseUsageRepresents token usage details including input tokens, output tokens,
a breakdown of output tokens, and the total tokens used.
No
└─ userstringA unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.No
typeenumThe type of the event. Always response.created.
Possible values: response.created
Yes

OpenAI.ResponseError

An error object returned when the model fails to generate a Response.
NameTypeDescriptionRequiredDefault
codeOpenAI.ResponseErrorCodeThe error code for the response.Yes
messagestringA human-readable description of the error.Yes

OpenAI.ResponseErrorCode

The error code for the response.
PropertyValue
DescriptionThe error code for the response.
Typestring
Valuesserver_error
rate_limit_exceeded
invalid_prompt
vector_store_timeout
invalid_image
invalid_image_format
invalid_base64_image
invalid_image_url
image_too_large
image_too_small
image_parse_error
image_content_policy_violation
invalid_image_mode
image_file_too_large
unsupported_image_media_type
empty_image_file
failed_to_download_image
image_file_not_found

OpenAI.ResponseErrorEvent

Emitted when an error occurs.
NameTypeDescriptionRequiredDefault
codestringThe error code.Yes
messagestringThe error message.Yes
paramstringThe error parameter.Yes
typeenumThe type of the event. Always error.
Possible values: error
Yes
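When streaming, this terminal event can be handled defensively. A minimal sketch in Python, assuming each server-sent event has already been parsed into a dict with the fields documented above (not the official SDK surface):

```python
# Raise when a streamed event is the terminal "error" event; code, message,
# and param are all required fields on this event type.
def raise_on_error(event: dict) -> None:
    if event.get("type") == "error":
        raise RuntimeError(
            f"{event['code']}: {event['message']} (param={event['param']})"
        )

try:
    raise_on_error(
        {"type": "error", "code": "server_error", "message": "boom", "param": None}
    )
except RuntimeError as exc:
    print(exc)  # → server_error: boom (param=None)
```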

OpenAI.ResponseFailedEvent

Emitted when a response fails.
NameTypeDescriptionRequiredDefault
responseobjectYes
└─ backgroundbooleanWhether to run the model response in the background.
Learn more.
NoFalse
└─ created_atintegerUnix timestamp (in seconds) of when this Response was created.No
└─ errorOpenAI.ResponseErrorAn error object returned when the model fails to generate a Response.No
└─ idstringUnique identifier for this Response.No
└─ incomplete_detailsobjectDetails about why the response is incomplete.No
└─ reasonenumThe reason why the response is incomplete.
Possible values: max_output_tokens, content_filter
No
└─ instructionsstring or arrayA system (or developer) message inserted into the model’s context.

When used along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
No
└─ max_output_tokensintegerAn upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.No
└─ max_tool_callsintegerThe maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.No
└─ metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
└─ objectenumThe object type of this resource - always set to response.
Possible values: response
No
└─ outputarrayAn array of content items generated by the model.

- The length and order of items in the output array is dependent
on the model’s response.
- Rather than accessing the first item in the output array and
assuming it’s an assistant message with the content generated by
the model, you might consider using the output_text property where
supported in SDKs.
No
└─ output_textstringSDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
No
└─ parallel_tool_callsbooleanWhether to allow the model to run tool calls in parallel.NoTrue
└─ previous_response_idstringThe unique ID of the previous response to the model. Use this to
create multi-turn conversations.
No
└─ promptOpenAI.PromptReference to a prompt template and its variables.
No
└─ reasoningOpenAI.ReasoningConfiguration options for reasoning models (reasoning models only).
No
└─ statusenumThe status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Possible values: completed, failed, in_progress, cancelled, queued, incomplete
No
└─ temperaturenumberWhat sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
No
└─ textobjectConfiguration options for a text response from the model. Can be plain
text or structured JSON data. Learn more: Structured Outputs
No
└─ formatOpenAI.ResponseTextFormatConfigurationNo
└─ tool_choiceOpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObjectHow the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
No
└─ toolsarrayAn array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

- Built-in tools: Tools that are provided by OpenAI that extend the
model’s capabilities, like web search or file search.
No
└─ top_logprobsintegerAn integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.No
└─ top_pnumberAn alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or temperature but not both.
No
└─ truncationenumThe truncation strategy to use for the model response.
- auto: If the context of this response and previous ones exceeds
the model’s context window size, the model will truncate the
response to fit the context window by dropping input items in the
middle of the conversation.
- disabled (default): If a model response will exceed the context window
size for a model, the request will fail with a 400 error.
Possible values: auto, disabled
No
└─ usageOpenAI.ResponseUsageRepresents token usage details including input tokens, output tokens,
a breakdown of output tokens, and the total tokens used.
No
└─ userstringA unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.No
typeenumThe type of the event. Always response.failed.
Possible values: response.failed
Yes

OpenAI.ResponseFileSearchCallCompletedEvent

Emitted when a file search call is completed (results found).
NameTypeDescriptionRequiredDefault
item_idstringThe ID of the output item that the file search call is initiated on.Yes
output_indexintegerThe index of the output item that the file search call is initiated on.Yes
typeenumThe type of the event. Always response.file_search_call.completed.
Possible values: response.file_search_call.completed
Yes

OpenAI.ResponseFileSearchCallInProgressEvent

Emitted when a file search call is initiated.
NameTypeDescriptionRequiredDefault
item_idstringThe ID of the output item that the file search call is initiated on.Yes
output_indexintegerThe index of the output item that the file search call is initiated on.Yes
typeenumThe type of the event. Always response.file_search_call.in_progress.
Possible values: response.file_search_call.in_progress
Yes

OpenAI.ResponseFileSearchCallSearchingEvent

Emitted when a file search is currently searching.
NameTypeDescriptionRequiredDefault
item_idstringThe ID of the output item that the file search call is initiated on.Yes
output_indexintegerThe index of the output item that the file search call is searching.Yes
typeenumThe type of the event. Always response.file_search_call.searching.
Possible values: response.file_search_call.searching
Yes

OpenAI.ResponseFormat

Discriminator for OpenAI.ResponseFormat

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeenum
Possible values: text, json_object, json_schema
Yes

OpenAI.ResponseFormatJsonObject

JSON object response format. An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so.
NameTypeDescriptionRequiredDefault
typeenumThe type of response format being defined. Always json_object.
Possible values: json_object
Yes

OpenAI.ResponseFormatJsonSchema

JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
NameTypeDescriptionRequiredDefault
json_schemaobjectStructured Outputs configuration options, including a JSON Schema.Yes
└─ descriptionstringA description of what the response format is for, used by the model to
determine how to respond in the format.
No
└─ namestringThe name of the response format. Must be a-z, A-Z, 0-9, or contain
underscores and dashes, with a maximum length of 64.
No
└─ schemaOpenAI.ResponseFormatJsonSchemaSchemaThe schema for the response format, described as a JSON Schema object.
Learn how to build JSON schemas here.
No
└─ strictbooleanWhether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs
guide
.
NoFalse
typeenumThe type of response format being defined. Always json_schema.
Possible values: json_schema
Yes
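As an illustration, a request body that opts into this format might look like the following sketch. The deployment name and schema shape are hypothetical placeholders, not values from this reference.

```python
import json

# Hypothetical chat-completions request body using the json_schema response
# format; the model name and schema are illustrative assumptions.
body = {
    "model": "gpt-4o",
    "messages": [
        {"role": "user", "content": "Extract the city from: I live in Paris."}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "city_extraction",  # a-z, A-Z, 0-9, _ and -, max length 64
            "strict": True,             # enforce exact schema adherence
            "schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
                "additionalProperties": False,
            },
        },
    },
}
print(json.dumps(body, indent=2))
```

With strict set to true, only the subset of JSON Schema described in the Structured Outputs guide is accepted.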

OpenAI.ResponseFormatJsonSchemaSchema

The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here. Type: object

OpenAI.ResponseFormatText

Default response format. Used to generate text responses.
NameTypeDescriptionRequiredDefault
typeenumThe type of response format being defined. Always text.
Possible values: text
Yes

OpenAI.ResponseFunctionCallArgumentsDeltaEvent

Emitted when there is a partial function-call arguments delta.
NameTypeDescriptionRequiredDefault
deltastringThe function-call arguments delta that is added.Yes
item_idstringThe ID of the output item that the function-call arguments delta is added to.Yes
obfuscationstringA field of random characters introduced by stream obfuscation. Stream obfuscation is a mechanism that mitigates certain side-channel attacks.Yes
output_indexintegerThe index of the output item that the function-call arguments delta is added to.Yes
typeenumThe type of the event. Always response.function_call_arguments.delta.
Possible values: response.function_call_arguments.delta
Yes

OpenAI.ResponseFunctionCallArgumentsDoneEvent

Emitted when function-call arguments are finalized.
NameTypeDescriptionRequiredDefault
argumentsstringThe function-call arguments.Yes
item_idstringThe ID of the item.Yes
output_indexintegerThe index of the output item.Yes
typeenumThe type of the event. Always response.function_call_arguments.done.
Possible values: response.function_call_arguments.done
Yes
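The delta and done events pair naturally: delta events stream argument fragments per item, and the done event carries the finalized string. A minimal accumulation sketch, assuming events arrive as parsed dicts:

```python
import json

# Accumulate response.function_call_arguments.delta events per item_id until
# the matching ...done event arrives, then parse the finalized JSON arguments.
def accumulate(events):
    buffers: dict[str, str] = {}
    finished: dict[str, dict] = {}
    for ev in events:
        if ev["type"] == "response.function_call_arguments.delta":
            buffers[ev["item_id"]] = buffers.get(ev["item_id"], "") + ev["delta"]
        elif ev["type"] == "response.function_call_arguments.done":
            finished[ev["item_id"]] = json.loads(ev["arguments"])
    return finished

events = [
    {"type": "response.function_call_arguments.delta", "item_id": "fc_1", "delta": '{"loc'},
    {"type": "response.function_call_arguments.delta", "item_id": "fc_1", "delta": 'ation": "Paris"}'},
    {"type": "response.function_call_arguments.done", "item_id": "fc_1", "arguments": '{"location": "Paris"}'},
]
print(accumulate(events))  # → {'fc_1': {'location': 'Paris'}}
```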

OpenAI.ResponseImageGenCallCompletedEvent

Emitted when an image generation tool call has completed and the final image is available.
NameTypeDescriptionRequiredDefault
item_idstringThe unique identifier of the image generation item being processed.Yes
output_indexintegerThe index of the output item in the response’s output array.Yes
typeenumThe type of the event. Always response.image_generation_call.completed.
Possible values: response.image_generation_call.completed
Yes

OpenAI.ResponseImageGenCallGeneratingEvent

Emitted when an image generation tool call is actively generating an image (intermediate state).
NameTypeDescriptionRequiredDefault
item_idstringThe unique identifier of the image generation item being processed.Yes
output_indexintegerThe index of the output item in the response’s output array.Yes
typeenumThe type of the event. Always response.image_generation_call.generating.
Possible values: response.image_generation_call.generating
Yes

OpenAI.ResponseImageGenCallInProgressEvent

Emitted when an image generation tool call is in progress.
NameTypeDescriptionRequiredDefault
item_idstringThe unique identifier of the image generation item being processed.Yes
output_indexintegerThe index of the output item in the response’s output array.Yes
typeenumThe type of the event. Always response.image_generation_call.in_progress.
Possible values: response.image_generation_call.in_progress
Yes

OpenAI.ResponseImageGenCallPartialImageEvent

Emitted when a partial image is available during image generation streaming.
NameTypeDescriptionRequiredDefault
item_idstringThe unique identifier of the image generation item being processed.Yes
output_indexintegerThe index of the output item in the response’s output array.Yes
partial_image_b64stringBase64-encoded partial image data, suitable for rendering as an image.Yes
partial_image_indexinteger0-based index for the partial image (backend is 1-based, but this is 0-based for the user).Yes
typeenumThe type of the event. Always response.image_generation_call.partial_image.
Possible values: response.image_generation_call.partial_image
Yes
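Because partial_image_b64 is base64-encoded, clients must decode each frame before rendering. A minimal sketch, assuming the event arrives as a parsed dict (the PNG magic bytes below are only test data):

```python
import base64

# Decode a response.image_generation_call.partial_image event into raw bytes,
# keyed by its 0-based partial_image_index.
def decode_partial(event: dict) -> tuple[int, bytes]:
    return event["partial_image_index"], base64.b64decode(event["partial_image_b64"])

idx, data = decode_partial({
    "type": "response.image_generation_call.partial_image",
    "partial_image_index": 0,
    "partial_image_b64": base64.b64encode(b"\x89PNG").decode(),
})
print(idx, data)  # → 0 b'\x89PNG'
```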

OpenAI.ResponseInProgressEvent

Emitted when the response is in progress.
NameTypeDescriptionRequiredDefault
responseobjectYes
└─ backgroundbooleanWhether to run the model response in the background.
Learn more.
NoFalse
└─ created_atintegerUnix timestamp (in seconds) of when this Response was created.No
└─ errorOpenAI.ResponseErrorAn error object returned when the model fails to generate a Response.No
└─ idstringUnique identifier for this Response.No
└─ incomplete_detailsobjectDetails about why the response is incomplete.No
└─ reasonenumThe reason why the response is incomplete.
Possible values: max_output_tokens, content_filter
No
└─ instructionsstring or arrayA system (or developer) message inserted into the model’s context.

When used along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
No
└─ max_output_tokensintegerAn upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.No
└─ max_tool_callsintegerThe maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.No
└─ metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
└─ objectenumThe object type of this resource - always set to response.
Possible values: response
No
└─ outputarrayAn array of content items generated by the model.

- The length and order of items in the output array is dependent
on the model’s response.
- Rather than accessing the first item in the output array and
assuming it’s an assistant message with the content generated by
the model, you might consider using the output_text property where
supported in SDKs.
No
└─ output_textstringSDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
No
└─ parallel_tool_callsbooleanWhether to allow the model to run tool calls in parallel.NoTrue
└─ previous_response_idstringThe unique ID of the previous response to the model. Use this to
create multi-turn conversations.
No
└─ promptOpenAI.PromptReference to a prompt template and its variables.
No
└─ reasoningOpenAI.ReasoningConfiguration options for reasoning models (reasoning models only).
No
└─ statusenumThe status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Possible values: completed, failed, in_progress, cancelled, queued, incomplete
No
└─ temperaturenumberWhat sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
No
└─ textobjectConfiguration options for a text response from the model. Can be plain
text or structured JSON data. Learn more: Structured Outputs
No
└─ formatOpenAI.ResponseTextFormatConfigurationNo
└─ tool_choiceOpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObjectHow the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
No
└─ toolsarrayAn array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

- Built-in tools: Tools that are provided by OpenAI that extend the
model’s capabilities, like web search or file search.
No
└─ top_logprobsintegerAn integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.No
└─ top_pnumberAn alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or temperature but not both.
No
└─ truncationenumThe truncation strategy to use for the model response.
- auto: If the context of this response and previous ones exceeds
the model’s context window size, the model will truncate the
response to fit the context window by dropping input items in the
middle of the conversation.
- disabled (default): If a model response will exceed the context window
size for a model, the request will fail with a 400 error.
Possible values: auto, disabled
No
└─ usageOpenAI.ResponseUsageRepresents token usage details including input tokens, output tokens,
a breakdown of output tokens, and the total tokens used.
No
└─ userstringA unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.No
typeenumThe type of the event. Always response.in_progress.
Possible values: response.in_progress
Yes

OpenAI.ResponseIncompleteEvent

Emitted when a response finishes as incomplete.
NameTypeDescriptionRequiredDefault
responseobjectYes
└─ backgroundbooleanWhether to run the model response in the background.
Learn more.
NoFalse
└─ created_atintegerUnix timestamp (in seconds) of when this Response was created.No
└─ errorOpenAI.ResponseErrorAn error object returned when the model fails to generate a Response.No
└─ idstringUnique identifier for this Response.No
└─ incomplete_detailsobjectDetails about why the response is incomplete.No
└─ reasonenumThe reason why the response is incomplete.
Possible values: max_output_tokens, content_filter
No
└─ instructionsstring or arrayA system (or developer) message inserted into the model’s context.

When used along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
No
└─ max_output_tokensintegerAn upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.No
└─ max_tool_callsintegerThe maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.No
└─ metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
└─ objectenumThe object type of this resource - always set to response.
Possible values: response
No
└─ outputarrayAn array of content items generated by the model.

- The length and order of items in the output array is dependent
on the model’s response.
- Rather than accessing the first item in the output array and
assuming it’s an assistant message with the content generated by
the model, you might consider using the output_text property where
supported in SDKs.
No
└─ output_textstringSDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
No
└─ parallel_tool_callsbooleanWhether to allow the model to run tool calls in parallel.NoTrue
└─ previous_response_idstringThe unique ID of the previous response to the model. Use this to
create multi-turn conversations.
No
└─ promptOpenAI.PromptReference to a prompt template and its variables.
No
└─ reasoningOpenAI.ReasoningConfiguration options for reasoning models (reasoning models only).
No
└─ statusenumThe status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Possible values: completed, failed, in_progress, cancelled, queued, incomplete
No
└─ temperaturenumberWhat sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
No
└─ textobjectConfiguration options for a text response from the model. Can be plain
text or structured JSON data. Learn more: Structured Outputs
No
└─ formatOpenAI.ResponseTextFormatConfigurationNo
└─ tool_choiceOpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObjectHow the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
No
└─ toolsarrayAn array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

- Built-in tools: Tools that are provided by OpenAI that extend the
model’s capabilities, like web search or file search.
No
└─ top_logprobsintegerAn integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.No
└─ top_pnumberAn alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or temperature but not both.
No
└─ truncationenumThe truncation strategy to use for the model response.
- auto: If the context of this response and previous ones exceeds
the model’s context window size, the model will truncate the
response to fit the context window by dropping input items in the
middle of the conversation.
- disabled (default): If a model response will exceed the context window
size for a model, the request will fail with a 400 error.
Possible values: auto, disabled
No
└─ usageOpenAI.ResponseUsageRepresents token usage details including input tokens, output tokens,
a breakdown of output tokens, and the total tokens used.
No
└─ userstringA unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.No
typeenumThe type of the event. Always response.incomplete.
Possible values: response.incomplete
Yes

OpenAI.ResponseItemList

A list of Response items.
NameTypeDescriptionRequiredDefault
dataarrayA list of items used to generate this response.Yes
first_idstringThe ID of the first item in the list.Yes
has_morebooleanWhether there are more items available.Yes
last_idstringThe ID of the last item in the list.Yes
objectenumThe type of object returned, must be list.
Possible values: list
Yes
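has_more together with last_id supports cursor pagination. A minimal sketch, where list_items is a hypothetical callable wrapping the list endpoint (with an after query parameter as the cursor):

```python
# Page through a Response item list by passing last_id as the "after" cursor
# until has_more is false.
def all_items(list_items):
    after, items = None, []
    while True:
        page = list_items(after=after)
        items.extend(page["data"])
        if not page["has_more"]:
            return items
        after = page["last_id"]

# Fake two-page endpoint for illustration.
pages = {
    None: {"object": "list", "data": [1, 2], "first_id": "a", "last_id": "b", "has_more": True},
    "b": {"object": "list", "data": [3], "first_id": "c", "last_id": "c", "has_more": False},
}
print(all_items(lambda after=None: pages[after]))  # → [1, 2, 3]
```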

OpenAI.ResponseMCPCallArgumentsDeltaEvent

Emitted when there is a delta (partial update) to the arguments of an MCP tool call.
NameTypeDescriptionRequiredDefault
deltaThe partial update to the arguments for the MCP tool call.Yes
item_idstringThe unique identifier of the MCP tool call item being processed.Yes
obfuscationstringA field of random characters introduced by stream obfuscation. Stream obfuscation is a mechanism that mitigates certain side-channel attacks.Yes
output_indexintegerThe index of the output item in the response’s output array.Yes
typeenumThe type of the event. Always response.mcp_call.arguments_delta.
Possible values: response.mcp_call.arguments_delta
Yes

OpenAI.ResponseMCPCallArgumentsDoneEvent

Emitted when the arguments for an MCP tool call are finalized.
NameTypeDescriptionRequiredDefault
argumentsThe finalized arguments for the MCP tool call.Yes
item_idstringThe unique identifier of the MCP tool call item being processed.Yes
output_indexintegerThe index of the output item in the response’s output array.Yes
typeenumThe type of the event. Always response.mcp_call.arguments_done.
Possible values: response.mcp_call.arguments_done
Yes

OpenAI.ResponseMCPCallCompletedEvent

Emitted when an MCP tool call has completed successfully.
NameTypeDescriptionRequiredDefault
typeenumThe type of the event. Always response.mcp_call.completed.
Possible values: response.mcp_call.completed
Yes

OpenAI.ResponseMCPCallFailedEvent

Emitted when an MCP tool call has failed.
NameTypeDescriptionRequiredDefault
typeenumThe type of the event. Always response.mcp_call.failed.
Possible values: response.mcp_call.failed
Yes

OpenAI.ResponseMCPCallInProgressEvent

Emitted when an MCP tool call is in progress.
NameTypeDescriptionRequiredDefault
item_idstringThe unique identifier of the MCP tool call item being processed.Yes
output_indexintegerThe index of the output item in the response’s output array.Yes
typeenumThe type of the event. Always response.mcp_call.in_progress.
Possible values: response.mcp_call.in_progress
Yes

OpenAI.ResponseMCPListToolsCompletedEvent

Emitted when the list of available MCP tools has been successfully retrieved.
NameTypeDescriptionRequiredDefault
typeenumThe type of the event. Always response.mcp_list_tools.completed.
Possible values: response.mcp_list_tools.completed
Yes

OpenAI.ResponseMCPListToolsFailedEvent

Emitted when the attempt to list available MCP tools has failed.
NameTypeDescriptionRequiredDefault
typeenumThe type of the event. Always response.mcp_list_tools.failed.
Possible values: response.mcp_list_tools.failed
Yes

OpenAI.ResponseMCPListToolsInProgressEvent

Emitted when the system is in the process of retrieving the list of available MCP tools.
NameTypeDescriptionRequiredDefault
typeenumThe type of the event. Always response.mcp_list_tools.in_progress.
Possible values: response.mcp_list_tools.in_progress
Yes

OpenAI.ResponseOutputItemAddedEvent

Emitted when a new output item is added.
NameTypeDescriptionRequiredDefault
itemobjectContent item used to generate a response.Yes
└─ idstringNo
└─ typeOpenAI.ItemTypeNo
output_indexintegerThe index of the output item that was added.Yes
typeenumThe type of the event. Always response.output_item.added.
Possible values: response.output_item.added
Yes

OpenAI.ResponseOutputItemDoneEvent

Emitted when an output item is marked done.
NameTypeDescriptionRequiredDefault
itemobjectContent item used to generate a response.Yes
└─ idstringNo
└─ typeOpenAI.ItemTypeNo
output_indexintegerThe index of the output item that was marked done.Yes
typeenumThe type of the event. Always response.output_item.done.
Possible values: response.output_item.done
Yes
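Together with response.output_item.added, this event lets a client mirror the response's output array locally: added announces a new item at output_index, and done replaces it with the finalized item. A minimal sketch, assuming events arrive as parsed dicts:

```python
# Keep a local copy of the response's output array in sync with
# response.output_item.added / response.output_item.done events.
def apply_item_event(output: list, event: dict) -> None:
    if event["type"] == "response.output_item.added":
        # added events arrive in order, at the next free index
        output.insert(event["output_index"], event["item"])
    elif event["type"] == "response.output_item.done":
        # done replaces the in-progress item with the finalized one
        output[event["output_index"]] = event["item"]

output: list = []
apply_item_event(output, {"type": "response.output_item.added",
                          "output_index": 0, "item": {"id": "msg_1"}})
apply_item_event(output, {"type": "response.output_item.done",
                          "output_index": 0,
                          "item": {"id": "msg_1", "status": "completed"}})
print(output)  # → [{'id': 'msg_1', 'status': 'completed'}]
```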

OpenAI.ResponsePromptVariables

Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files. Type: object

OpenAI.ResponseQueuedEvent

Emitted when a response is queued and waiting to be processed.
NameTypeDescriptionRequiredDefault
responseobjectYes
└─ backgroundbooleanWhether to run the model response in the background.
Learn more.
NoFalse
└─ created_atintegerUnix timestamp (in seconds) of when this Response was created.No
└─ errorOpenAI.ResponseErrorAn error object returned when the model fails to generate a Response.No
└─ idstringUnique identifier for this Response.No
└─ incomplete_detailsobjectDetails about why the response is incomplete.No
└─ reasonenumThe reason why the response is incomplete.
Possible values: max_output_tokens, content_filter
No
└─ instructionsstring or arrayA system (or developer) message inserted into the model’s context.

When used along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
No
└─ max_output_tokensintegerAn upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.No
└─ max_tool_callsintegerThe maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.No
└─ metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
└─ objectenumThe object type of this resource - always set to response.
Possible values: response
No
└─ outputarrayAn array of content items generated by the model.

- The length and order of items in the output array is dependent
on the model’s response.
- Rather than accessing the first item in the output array and
assuming it’s an assistant message with the content generated by
the model, you might consider using the output_text property where
supported in SDKs.
No
└─ output_textstringSDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
No
└─ parallel_tool_callsbooleanWhether to allow the model to run tool calls in parallel.NoTrue
└─ previous_response_idstringThe unique ID of the previous response to the model. Use this to
create multi-turn conversations.
No
└─ promptOpenAI.PromptReference to a prompt template and its variables.
No
└─ reasoningOpenAI.ReasoningConfiguration options for reasoning models (reasoning models only).
No
└─ statusenumThe status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Possible values: completed, failed, in_progress, cancelled, queued, incomplete
No
└─ temperaturenumberWhat sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
No
└─ textobjectConfiguration options for a text response from the model. Can be plain
text or structured JSON data. Learn more: Structured Outputs
No
└─ formatOpenAI.ResponseTextFormatConfigurationNo
└─ tool_choiceOpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObjectHow the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
No
└─ toolsarrayAn array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

- Built-in tools: Tools that are provided by OpenAI that extend the
model’s capabilities, like web search or file search.
- Function calls (custom tools): Functions that are defined by you,
enabling the model to call your own code.
No
└─ top_logprobsintegerAn integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.No
└─ top_pnumberAn alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or temperature but not both.
No
└─ truncationenumThe truncation strategy to use for the model response.
- auto: If the context of this response and previous ones exceeds
the model’s context window size, the model will truncate the
response to fit the context window by dropping input items in the
middle of the conversation.
- disabled (default): If a model response will exceed the context window
size for a model, the request will fail with a 400 error.
Possible values: auto, disabled
No
└─ usageOpenAI.ResponseUsageRepresents token usage details including input tokens, output tokens,
a breakdown of output tokens, and the total tokens used.
No
└─ userstringA unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.No
typeenumThe type of the event. Always response.queued.
Possible values: response.queued
Yes
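The response object above carries a previous_response_id field for multi-turn conversations. The sketch below builds a follow-up request body using the field names from this reference; the model name, helper function, and input shape are illustrative placeholders, not prescribed values.

```python
def build_followup_request(previous_response_id: str, user_text: str) -> dict:
    """Build a request body that continues a prior response turn.

    `previous_response_id` links this request to the earlier response so the
    conversation context is carried forward by the service.
    """
    return {
        "model": "gpt-4o",  # placeholder model/deployment name
        "previous_response_id": previous_response_id,
        "input": [
            {
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": user_text}],
            }
        ],
    }

payload = build_followup_request("resp_abc123", "And what about the second point?")
```

Because items from the previous response are not carried over automatically into instructions, this chaining is what makes it simple to swap out system or developer messages between turns.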

OpenAI.ResponseReasoningDeltaEvent

Emitted when there is a delta (partial update) to the reasoning content.
NameTypeDescriptionRequiredDefault
content_indexintegerThe index of the reasoning content part within the output item.Yes
deltaThe partial update to the reasoning content.Yes
item_idstringThe unique identifier of the item for which reasoning is being updated.Yes
obfuscationstringA field of random characters introduced by stream obfuscation. Stream obfuscation is a mechanism that mitigates certain side-channel attacks.Yes
output_indexintegerThe index of the output item in the response’s output array.Yes
typeenumThe type of the event. Always response.reasoning.delta.
Possible values: response.reasoning.delta
Yes

OpenAI.ResponseReasoningDoneEvent

Emitted when the reasoning content is finalized for an item.
NameTypeDescriptionRequiredDefault
content_indexintegerThe index of the reasoning content part within the output item.Yes
item_idstringThe unique identifier of the item for which reasoning is finalized.Yes
output_indexintegerThe index of the output item in the response’s output array.Yes
textstringThe finalized reasoning text.Yes
typeenumThe type of the event. Always response.reasoning.done.
Possible values: response.reasoning.done
Yes

OpenAI.ResponseReasoningSummaryDeltaEvent

Emitted when there is a delta (partial update) to the reasoning summary content.
NameTypeDescriptionRequiredDefault
deltaThe partial update to the reasoning summary content.Yes
item_idstringThe unique identifier of the item for which the reasoning summary is being updated.Yes
obfuscationstringA field of random characters introduced by stream obfuscation. Stream obfuscation is a mechanism that mitigates certain side-channel attacks.Yes
output_indexintegerThe index of the output item in the response’s output array.Yes
summary_indexintegerThe index of the summary part within the output item.Yes
typeenumThe type of the event. Always response.reasoning_summary.delta.
Possible values: response.reasoning_summary.delta
Yes

OpenAI.ResponseReasoningSummaryDoneEvent

Emitted when the reasoning summary content is finalized for an item.
NameTypeDescriptionRequiredDefault
item_idstringThe unique identifier of the item for which the reasoning summary is finalized.Yes
output_indexintegerThe index of the output item in the response’s output array.Yes
summary_indexintegerThe index of the summary part within the output item.Yes
textstringThe finalized reasoning summary text.Yes
typeenumThe type of the event. Always response.reasoning_summary.done.
Possible values: response.reasoning_summary.done
Yes

OpenAI.ResponseReasoningSummaryPartAddedEvent

Emitted when a new reasoning summary part is added.
NameTypeDescriptionRequiredDefault
item_idstringThe ID of the item this summary part is associated with.Yes
output_indexintegerThe index of the output item this summary part is associated with.Yes
partobjectYes
└─ typeOpenAI.ReasoningItemSummaryPartTypeNo
summary_indexintegerThe index of the summary part within the reasoning summary.Yes
typeenumThe type of the event. Always response.reasoning_summary_part.added.
Possible values: response.reasoning_summary_part.added
Yes

OpenAI.ResponseReasoningSummaryPartDoneEvent

Emitted when a reasoning summary part is completed.
NameTypeDescriptionRequiredDefault
item_idstringThe ID of the item this summary part is associated with.Yes
output_indexintegerThe index of the output item this summary part is associated with.Yes
partobjectYes
└─ typeOpenAI.ReasoningItemSummaryPartTypeNo
summary_indexintegerThe index of the summary part within the reasoning summary.Yes
typeenumThe type of the event. Always response.reasoning_summary_part.done.
Possible values: response.reasoning_summary_part.done
Yes

OpenAI.ResponseReasoningSummaryTextDeltaEvent

Emitted when a delta is added to a reasoning summary text.
NameTypeDescriptionRequiredDefault
deltastringThe text delta that was added to the summary.Yes
item_idstringThe ID of the item this summary text delta is associated with.Yes
obfuscationstringA field of random characters introduced by stream obfuscation. Stream obfuscation is a mechanism that mitigates certain side-channel attacks.Yes
output_indexintegerThe index of the output item this summary text delta is associated with.Yes
summary_indexintegerThe index of the summary part within the reasoning summary.Yes
typeenumThe type of the event. Always response.reasoning_summary_text.delta.
Possible values: response.reasoning_summary_text.delta
Yes

OpenAI.ResponseReasoningSummaryTextDoneEvent

Emitted when a reasoning summary text is completed.
NameTypeDescriptionRequiredDefault
item_idstringThe ID of the item this summary text is associated with.Yes
output_indexintegerThe index of the output item this summary text is associated with.Yes
summary_indexintegerThe index of the summary part within the reasoning summary.Yes
textstringThe full text of the completed reasoning summary.Yes
typeenumThe type of the event. Always response.reasoning_summary_text.done.
Possible values: response.reasoning_summary_text.done
Yes

OpenAI.ResponseRefusalDeltaEvent

Emitted when there is a partial refusal text.
NameTypeDescriptionRequiredDefault
content_indexintegerThe index of the content part that the refusal text is added to.Yes
deltastringThe refusal text that is added.Yes
item_idstringThe ID of the output item that the refusal text is added to.Yes
obfuscationstringA field of random characters introduced by stream obfuscation. Stream obfuscation is a mechanism that mitigates certain side-channel attacks.Yes
output_indexintegerThe index of the output item that the refusal text is added to.Yes
typeenumThe type of the event. Always response.refusal.delta.
Possible values: response.refusal.delta
Yes

OpenAI.ResponseRefusalDoneEvent

Emitted when refusal text is finalized.
NameTypeDescriptionRequiredDefault
content_indexintegerThe index of the content part in which the refusal text is finalized.Yes
item_idstringThe ID of the output item in which the refusal text is finalized.Yes
output_indexintegerThe index of the output item in which the refusal text is finalized.Yes
refusalstringThe refusal text that is finalized.Yes
typeenumThe type of the event. Always response.refusal.done.
Possible values: response.refusal.done
Yes

OpenAI.ResponseStreamEvent

Discriminator for OpenAI.ResponseStreamEvent

This component uses the property type to discriminate between different types:
Type ValueSchema
response.completedOpenAI.ResponseCompletedEvent
response.content_part.addedOpenAI.ResponseContentPartAddedEvent
response.content_part.doneOpenAI.ResponseContentPartDoneEvent
response.createdOpenAI.ResponseCreatedEvent
errorOpenAI.ResponseErrorEvent
response.file_search_call.completedOpenAI.ResponseFileSearchCallCompletedEvent
response.file_search_call.in_progressOpenAI.ResponseFileSearchCallInProgressEvent
response.file_search_call.searchingOpenAI.ResponseFileSearchCallSearchingEvent
response.function_call_arguments.deltaOpenAI.ResponseFunctionCallArgumentsDeltaEvent
response.function_call_arguments.doneOpenAI.ResponseFunctionCallArgumentsDoneEvent
response.in_progressOpenAI.ResponseInProgressEvent
response.failedOpenAI.ResponseFailedEvent
response.incompleteOpenAI.ResponseIncompleteEvent
response.output_item.addedOpenAI.ResponseOutputItemAddedEvent
response.output_item.doneOpenAI.ResponseOutputItemDoneEvent
response.refusal.deltaOpenAI.ResponseRefusalDeltaEvent
response.refusal.doneOpenAI.ResponseRefusalDoneEvent
response.output_text.deltaOpenAI.ResponseTextDeltaEvent
response.output_text.doneOpenAI.ResponseTextDoneEvent
response.reasoning_summary_part.addedOpenAI.ResponseReasoningSummaryPartAddedEvent
response.reasoning_summary_part.doneOpenAI.ResponseReasoningSummaryPartDoneEvent
response.reasoning_summary_text.deltaOpenAI.ResponseReasoningSummaryTextDeltaEvent
response.reasoning_summary_text.doneOpenAI.ResponseReasoningSummaryTextDoneEvent
response.web_search_call.completedOpenAI.ResponseWebSearchCallCompletedEvent
response.web_search_call.in_progressOpenAI.ResponseWebSearchCallInProgressEvent
response.web_search_call.searchingOpenAI.ResponseWebSearchCallSearchingEvent
response.image_generation_call.completedOpenAI.ResponseImageGenCallCompletedEvent
response.image_generation_call.generatingOpenAI.ResponseImageGenCallGeneratingEvent
response.image_generation_call.in_progressOpenAI.ResponseImageGenCallInProgressEvent
response.image_generation_call.partial_imageOpenAI.ResponseImageGenCallPartialImageEvent
response.mcp_call.arguments_deltaOpenAI.ResponseMCPCallArgumentsDeltaEvent
response.mcp_call.arguments_doneOpenAI.ResponseMCPCallArgumentsDoneEvent
response.mcp_call.completedOpenAI.ResponseMCPCallCompletedEvent
response.mcp_call.failedOpenAI.ResponseMCPCallFailedEvent
response.mcp_call.in_progressOpenAI.ResponseMCPCallInProgressEvent
response.mcp_list_tools.completedOpenAI.ResponseMCPListToolsCompletedEvent
response.mcp_list_tools.failedOpenAI.ResponseMCPListToolsFailedEvent
response.mcp_list_tools.in_progressOpenAI.ResponseMCPListToolsInProgressEvent
response.queuedOpenAI.ResponseQueuedEvent
response.reasoning.deltaOpenAI.ResponseReasoningDeltaEvent
response.reasoning.doneOpenAI.ResponseReasoningDoneEvent
response.reasoning_summary.deltaOpenAI.ResponseReasoningSummaryDeltaEvent
response.reasoning_summary.doneOpenAI.ResponseReasoningSummaryDoneEvent
response.code_interpreter_call_code.deltaOpenAI.ResponseCodeInterpreterCallCodeDeltaEvent
response.code_interpreter_call_code.doneOpenAI.ResponseCodeInterpreterCallCodeDoneEvent
response.code_interpreter_call.completedOpenAI.ResponseCodeInterpreterCallCompletedEvent
response.code_interpreter_call.in_progressOpenAI.ResponseCodeInterpreterCallInProgressEvent
response.code_interpreter_call.interpretingOpenAI.ResponseCodeInterpreterCallInterpretingEvent
NameTypeDescriptionRequiredDefault
sequence_numberintegerThe sequence number for this event.Yes
typeOpenAI.ResponseStreamEventTypeYes
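Since every streamed event carries the type discriminator listed above, a consumer can route events through a handler table keyed by that string. A minimal sketch (handler names and the sample events are illustrative):

```python
def route_stream_event(event: dict, handlers: dict) -> None:
    """Dispatch one streamed event to a handler keyed by its `type`
    discriminator. Unknown types are ignored, so newly added event
    types do not break the consumer."""
    handler = handlers.get(event.get("type"))
    if handler is not None:
        handler(event)

collected = []
handlers = {
    "response.output_text.delta": lambda e: collected.append(e["delta"]),
    "response.completed": lambda e: collected.append("<done>"),
}

for ev in [
    {"type": "response.created", "sequence_number": 0},
    {"type": "response.output_text.delta", "sequence_number": 1, "delta": "Hi"},
    {"type": "response.completed", "sequence_number": 2},
]:
    route_stream_event(ev, handlers)
```

Ignoring unrecognized types (rather than raising) is a deliberate choice here: the event-type list grows over time, and a strict dispatcher would fail on otherwise valid streams.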

OpenAI.ResponseStreamEventType

PropertyValue
Typestring
Valuesresponse.audio.delta
response.audio.done
response.audio_transcript.delta
response.audio_transcript.done
response.code_interpreter_call_code.delta
response.code_interpreter_call_code.done
response.code_interpreter_call.completed
response.code_interpreter_call.in_progress
response.code_interpreter_call.interpreting
response.completed
response.content_part.added
response.content_part.done
response.created
error
response.file_search_call.completed
response.file_search_call.in_progress
response.file_search_call.searching
response.function_call_arguments.delta
response.function_call_arguments.done
response.in_progress
response.failed
response.incomplete
response.output_item.added
response.output_item.done
response.refusal.delta
response.refusal.done
response.output_text.annotation.added
response.output_text.delta
response.output_text.done
response.reasoning_summary_part.added
response.reasoning_summary_part.done
response.reasoning_summary_text.delta
response.reasoning_summary_text.done
response.web_search_call.completed
response.web_search_call.in_progress
response.web_search_call.searching
response.image_generation_call.completed
response.image_generation_call.generating
response.image_generation_call.in_progress
response.image_generation_call.partial_image
response.mcp_call.arguments_delta
response.mcp_call.arguments_done
response.mcp_call.completed
response.mcp_call.failed
response.mcp_call.in_progress
response.mcp_list_tools.completed
response.mcp_list_tools.failed
response.mcp_list_tools.in_progress
response.queued
response.reasoning.delta
response.reasoning.done
response.reasoning_summary.delta
response.reasoning_summary.done

OpenAI.ResponseTextDeltaEvent

Emitted when there is an additional text delta.
NameTypeDescriptionRequiredDefault
content_indexintegerThe index of the content part that the text delta was added to.Yes
deltastringThe text delta that was added.Yes
item_idstringThe ID of the output item that the text delta was added to.Yes
obfuscationstringA field of random characters introduced by stream obfuscation. Stream obfuscation is a mechanism that mitigates certain side-channel attacks.Yes
output_indexintegerThe index of the output item that the text delta was added to.Yes
typeenumThe type of the event. Always response.output_text.delta.
Possible values: response.output_text.delta
Yes

OpenAI.ResponseTextDoneEvent

Emitted when text content is finalized.
NameTypeDescriptionRequiredDefault
content_indexintegerThe index of the content part in which the text content is finalized.Yes
item_idstringThe ID of the output item in which the text content is finalized.Yes
output_indexintegerThe index of the output item in which the text content is finalized.Yes
textstringThe text content that is finalized.Yes
typeenumThe type of the event. Always response.output_text.done.
Possible values: response.output_text.done
Yes
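Taken together, the delta and done events above imply a simple consistency check a streaming consumer can apply: the concatenation of all response.output_text.delta payloads should equal the text field of the final response.output_text.done event. A sketch (the sample events are illustrative):

```python
def accumulate_text(events: list) -> str:
    """Concatenate output_text delta payloads and cross-check the result
    against the finalized text from the done event, if one is present."""
    parts = []
    final = None
    for e in events:
        if e["type"] == "response.output_text.delta":
            parts.append(e["delta"])
        elif e["type"] == "response.output_text.done":
            final = e["text"]
    joined = "".join(parts)
    if final is not None and final != joined:
        raise ValueError("done text does not match accumulated deltas")
    return joined

text = accumulate_text([
    {"type": "response.output_text.delta", "delta": "Hel"},
    {"type": "response.output_text.delta", "delta": "lo"},
    {"type": "response.output_text.done", "text": "Hello"},
])
```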

OpenAI.ResponseTextFormatConfiguration

Discriminator for OpenAI.ResponseTextFormatConfiguration

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.ResponseTextFormatConfigurationTypeAn object specifying the format that the model must output.

Configuring { "type": "json_schema" } enables Structured Outputs,
which ensures the model will match your supplied JSON schema. Learn more in the
Structured Outputs guide.

The default format is { "type": "text" } with no additional options.

Not recommended for gpt-4o and newer models:

Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
Yes

OpenAI.ResponseTextFormatConfigurationJsonObject

NameTypeDescriptionRequiredDefault
typeenum
Possible values: json_object
Yes

OpenAI.ResponseTextFormatConfigurationJsonSchema

JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
NameTypeDescriptionRequiredDefault
descriptionstringA description of what the response format is for, used by the model to
determine how to respond in the format.
No
namestringThe name of the response format. Must be a-z, A-Z, 0-9, or contain
underscores and dashes, with a maximum length of 64.
Yes
schemaOpenAI.ResponseFormatJsonSchemaSchemaThe schema for the response format, described as a JSON Schema object.
Learn how to build JSON schemas here.
Yes
strictbooleanWhether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs guide.
NoFalse
typeenumThe type of response format being defined. Always json_schema.
Possible values: json_schema
Yes
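A text format configuration of this kind might look like the fragment below. The schema contents (weather_report, its properties) are an illustrative example, not part of the reference; the name, strict, and schema fields follow the rows above.

```python
# A minimal `text.format` payload enabling Structured Outputs via json_schema.
text_format = {
    "format": {
        "type": "json_schema",
        "name": "weather_report",  # a-z, A-Z, 0-9, underscores/dashes, max 64 chars
        "strict": True,            # enforce exact adherence to the schema below
        "schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "temp_c": {"type": "number"},
            },
            "required": ["city", "temp_c"],
            "additionalProperties": False,
        },
    }
}
```

With strict set to true, only the supported subset of JSON Schema is accepted, so keeping the schema flat and closed (additionalProperties: False) as above is the usual pattern.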

OpenAI.ResponseTextFormatConfigurationText

NameTypeDescriptionRequiredDefault
typeenum
Possible values: text
Yes

OpenAI.ResponseTextFormatConfigurationType

An object specifying the format that the model must output. Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide. The default format is { "type": "text" } with no additional options. Not recommended for gpt-4o and newer models: Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.
PropertyValue
DescriptionAn object specifying the format that the model must output.
Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide. The default format is { "type": "text" } with no additional options. Not recommended for gpt-4o and newer models: Setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it. | | Type | string | | Values | text
json_schema
json_object |

OpenAI.ResponseUsage

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.
NameTypeDescriptionRequiredDefault
input_tokensintegerThe number of input tokens.Yes
input_tokens_detailsobjectA detailed breakdown of the input tokens.Yes
└─ cached_tokensintegerThe number of tokens that were retrieved from the cache.
More on prompt caching.
No
output_tokensintegerThe number of output tokens.Yes
output_tokens_detailsobjectA detailed breakdown of the output tokens.Yes
└─ reasoning_tokensintegerThe number of reasoning tokens.No
total_tokensintegerThe total number of tokens used.Yes
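Reading a ResponseUsage payload is mostly a matter of treating the detail breakdowns as optional. A small sketch (the helper and sample numbers are illustrative):

```python
def summarize_usage(usage: dict) -> dict:
    """Extract headline numbers from a ResponseUsage payload; the
    cached/reasoning breakdowns are optional, so default them to 0."""
    cached = usage.get("input_tokens_details", {}).get("cached_tokens", 0)
    reasoning = usage.get("output_tokens_details", {}).get("reasoning_tokens", 0)
    return {
        "input": usage["input_tokens"],
        "output": usage["output_tokens"],
        "cached": cached,
        "reasoning": reasoning,
        "total": usage["total_tokens"],
    }

summary = summarize_usage({
    "input_tokens": 120,
    "input_tokens_details": {"cached_tokens": 100},
    "output_tokens": 30,
    "output_tokens_details": {"reasoning_tokens": 12},
    "total_tokens": 150,
})
```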

OpenAI.ResponseWebSearchCallCompletedEvent

web_search is not yet available via Azure OpenAI.
NameTypeDescriptionRequiredDefault
item_idstringUnique ID for the output item associated with the web search call.Yes
output_indexintegerThe index of the output item that the web search call is associated with.Yes
typeenumThe type of the event. Always response.web_search_call.completed.
Possible values: response.web_search_call.completed
Yes

OpenAI.ResponseWebSearchCallInProgressEvent

web_search is not yet available via Azure OpenAI.
NameTypeDescriptionRequiredDefault
item_idstringUnique ID for the output item associated with the web search call.Yes
output_indexintegerThe index of the output item that the web search call is associated with.Yes
typeenumThe type of the event. Always response.web_search_call.in_progress.
Possible values: response.web_search_call.in_progress
Yes

OpenAI.ResponseWebSearchCallSearchingEvent

web_search is not yet available via Azure OpenAI.
NameTypeDescriptionRequiredDefault
item_idstringUnique ID for the output item associated with the web search call.Yes
output_indexintegerThe index of the output item that the web search call is associated with.Yes
typeenumThe type of the event. Always response.web_search_call.searching.
Possible values: response.web_search_call.searching
Yes

OpenAI.ResponsesAssistantMessageItemParam

A message parameter item with the assistant role.
NameTypeDescriptionRequiredDefault
contentarrayThe content associated with the message.Yes
roleenumThe role of the message, which is always assistant.
Possible values: assistant
Yes

OpenAI.ResponsesAssistantMessageItemResource

A message resource item with the assistant role.
NameTypeDescriptionRequiredDefault
contentarrayThe content associated with the message.Yes
roleenumThe role of the message, which is always assistant.
Possible values: assistant
Yes

OpenAI.ResponsesDeveloperMessageItemParam

A message parameter item with the developer role.
NameTypeDescriptionRequiredDefault
contentarrayThe content associated with the message.Yes
roleenumThe role of the message, which is always developer.
Possible values: developer
Yes

OpenAI.ResponsesDeveloperMessageItemResource

A message resource item with the developer role.
NameTypeDescriptionRequiredDefault
contentarrayThe content associated with the message.Yes
roleenumThe role of the message, which is always developer.
Possible values: developer
Yes

OpenAI.ResponsesMessageItemParam

A response message item, representing a role and content, as provided as client request parameters.

Discriminator for OpenAI.ResponsesMessageItemParam

This component uses the property role to discriminate between different types:
NameTypeDescriptionRequiredDefault
roleobjectThe collection of valid roles for responses message items.Yes
typeenumThe type of the responses item, which is always message.
Possible values: message
Yes

OpenAI.ResponsesMessageItemResource

A response message resource item, representing a role and content, as provided on service responses.

Discriminator for OpenAI.ResponsesMessageItemResource

This component uses the property role to discriminate between different types:
NameTypeDescriptionRequiredDefault
roleobjectThe collection of valid roles for responses message items.Yes
statusenumThe status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
Possible values: in_progress, completed, incomplete
Yes
typeenumThe type of the responses item, which is always message.
Possible values: message
Yes

OpenAI.ResponsesMessageRole

The collection of valid roles for responses message items.
PropertyValue
DescriptionThe collection of valid roles for responses message items.
Typestring
Valuessystem
developer
user
assistant

OpenAI.ResponsesSystemMessageItemParam

A message parameter item with the system role.
NameTypeDescriptionRequiredDefault
contentarrayThe content associated with the message.Yes
roleenumThe role of the message, which is always system.
Possible values: system
Yes

OpenAI.ResponsesSystemMessageItemResource

A message resource item with the system role.
NameTypeDescriptionRequiredDefault
contentarrayThe content associated with the message.Yes
roleenumThe role of the message, which is always system.
Possible values: system
Yes

OpenAI.ResponsesUserMessageItemParam

A message parameter item with the user role.
NameTypeDescriptionRequiredDefault
contentarrayThe content associated with the message.Yes
roleenumThe role of the message, which is always user.
Possible values: user
Yes

OpenAI.ResponsesUserMessageItemResource

A message resource item with the user role.
NameTypeDescriptionRequiredDefault
contentarrayThe content associated with the message.Yes
roleenumThe role of the message, which is always user.
Possible values: user
Yes

OpenAI.RunGraderRequest

NameTypeDescriptionRequiredDefault
graderobjectA StringCheckGrader object that performs a string comparison between input and reference using a specified operation.Yes
└─ calculate_outputstringA formula to calculate the output based on grader results.No
└─ evaluation_metricenumThe evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l.
Possible values: fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l
No
└─ gradersobjectNo
└─ image_tagstringThe image tag to use for the python script.No
└─ inputarrayThe input text. This may include template strings.No
└─ modelstringThe model to use for the evaluation.No
└─ namestringThe name of the grader.No
└─ operationenumThe string check operation to perform. One of eq, ne, like, or ilike.
Possible values: eq, ne, like, ilike
No
└─ rangearrayThe range of the score. Defaults to [0, 1].No
└─ referencestringThe text being graded against.No
└─ sampling_paramsThe sampling parameters for the model.No
└─ sourcestringThe source code of the python script.No
└─ typeenumThe object type, which is always multi.
Possible values: multi
No
itemThe dataset item provided to the grader. This will be used to populate
the item namespace. See the guide for more details.
No
model_samplestringThe model sample to be evaluated. This value will be used to populate
the sample namespace. See the guide for more details.
The output_json variable will be populated if the model sample is a
valid JSON string.
Yes

OpenAI.RunGraderResponse

NameTypeDescriptionRequiredDefault
metadataobjectYes
└─ errorsobjectNo
└─ formula_parse_errorbooleanNo
└─ invalid_variable_errorbooleanNo
└─ model_grader_parse_errorbooleanNo
└─ model_grader_refusal_errorbooleanNo
└─ model_grader_server_errorbooleanNo
└─ model_grader_server_error_detailsstringNo
└─ other_errorbooleanNo
└─ python_grader_runtime_errorbooleanNo
└─ python_grader_runtime_error_detailsstringNo
└─ python_grader_server_errorbooleanNo
└─ python_grader_server_error_typestringNo
└─ sample_parse_errorbooleanNo
└─ truncated_observation_errorbooleanNo
└─ unresponsive_reward_errorbooleanNo
└─ execution_timenumberNo
└─ namestringNo
└─ sampled_model_namestringNo
└─ scoresNo
└─ token_usageintegerNo
└─ typestringNo
model_grader_token_usage_per_modelYes
rewardnumberYes
sub_rewardsYes

OpenAI.StaticChunkingStrategy

NameTypeDescriptionRequiredDefault
chunk_overlap_tokensintegerThe number of tokens that overlap between chunks. The default value is 400.

Note that the overlap must not exceed half of max_chunk_size_tokens.
Yes
max_chunk_size_tokensintegerThe maximum number of tokens in each chunk. The default value is 800. The minimum value is 100 and the maximum value is 4096.Yes
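The two constraints above (max chunk size in [100, 4096]; overlap at most half of the max size) can be checked client-side before a request is sent. A sketch, assuming a dict-shaped request param as in the next section:

```python
def build_static_chunking(chunk_overlap_tokens: int,
                          max_chunk_size_tokens: int) -> dict:
    """Validate the documented StaticChunkingStrategy constraints and
    return the request-param shape (type + static object)."""
    if not 100 <= max_chunk_size_tokens <= 4096:
        raise ValueError("max_chunk_size_tokens must be between 100 and 4096")
    if chunk_overlap_tokens > max_chunk_size_tokens // 2:
        raise ValueError(
            "chunk_overlap_tokens must not exceed half of max_chunk_size_tokens")
    return {
        "type": "static",
        "static": {
            "chunk_overlap_tokens": chunk_overlap_tokens,
            "max_chunk_size_tokens": max_chunk_size_tokens,
        },
    }

strategy = build_static_chunking(400, 800)  # the documented defaults
```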

OpenAI.StaticChunkingStrategyRequestParam

Customize your own chunking strategy by setting chunk size and chunk overlap.
NameTypeDescriptionRequiredDefault
staticOpenAI.StaticChunkingStrategyYes
typeenumAlways static.
Possible values: static
Yes

OpenAI.StaticChunkingStrategyResponseParam

NameTypeDescriptionRequiredDefault
staticOpenAI.StaticChunkingStrategyYes
typeenumAlways static.
Possible values: static
Yes

OpenAI.StopConfiguration

Not supported with latest reasoning models o3 and o4-mini. Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. This schema accepts one of the following types:
  • string
  • array
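Because StopConfiguration accepts either a single string or an array, client code typically normalizes it to a list and enforces the documented limit of 4 sequences. A sketch (the helper name is illustrative):

```python
def normalize_stop(stop) -> list:
    """Normalize a StopConfiguration value (a string, a list of strings,
    or None) into a list of at most 4 stop sequences."""
    if stop is None:
        return []
    seqs = [stop] if isinstance(stop, str) else list(stop)
    if len(seqs) > 4:
        raise ValueError("at most 4 stop sequences are allowed")
    return seqs
```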

OpenAI.Tool

Discriminator for OpenAI.Tool

This component uses the property type to discriminate between different types:
Type ValueSchema
functionOpenAI.FunctionTool
file_searchOpenAI.FileSearchTool
computer_use_previewOpenAI.ComputerUsePreviewTool
web_search_previewOpenAI.WebSearchPreviewTool
code_interpreterOpenAI.CodeInterpreterTool
image_generationOpenAI.ImageGenTool
local_shellOpenAI.LocalShellTool
mcpOpenAI.MCPTool
NameTypeDescriptionRequiredDefault
typeOpenAI.ToolTypeA tool that can be used to generate a response.Yes

OpenAI.ToolChoiceObject

Discriminator for OpenAI.ToolChoiceObject

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.ToolChoiceObjectTypeIndicates that the model should use a built-in tool to generate a response.Yes

OpenAI.ToolChoiceObjectCodeInterpreter

NameTypeDescriptionRequiredDefault
typeenum
Possible values: code_interpreter
Yes

OpenAI.ToolChoiceObjectComputer

NameTypeDescriptionRequiredDefault
typeenum
Possible values: computer_use_preview
Yes

OpenAI.ToolChoiceObjectFileSearch

NameTypeDescriptionRequiredDefault
typeenum
Possible values: file_search
Yes

OpenAI.ToolChoiceObjectFunction

Use this option to force the model to call a specific function.
NameTypeDescriptionRequiredDefault
namestringThe name of the function to call.Yes
typeenumFor function calling, the type is always function.
Possible values: function
Yes
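Forcing a specific function pairs the tool_choice object above with a matching entry in the tools array. The fragment below sketches that shape; the function name and parameter schema are illustrative placeholders.

```python
# Request fragment that forces the model to call one specific function.
# `tool_choice.name` must match a function declared in `tools`.
request_fragment = {
    "tools": [
        {
            "type": "function",
            "name": "get_weather",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    "tool_choice": {"type": "function", "name": "get_weather"},
}
```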

OpenAI.ToolChoiceObjectImageGen

NameTypeDescriptionRequiredDefault
typeenum
Possible values: image_generation
Yes

OpenAI.ToolChoiceObjectMCP

Use this option to force the model to call a specific tool on a remote MCP server.
NameTypeDescriptionRequiredDefault
namestringThe name of the tool to call on the server.No
server_labelstringThe label of the MCP server to use.Yes
typeenumFor MCP tools, the type is always mcp.
Possible values: mcp
Yes

OpenAI.ToolChoiceObjectType

Indicates that the model should use a built-in tool to generate a response.
PropertyValue
DescriptionIndicates that the model should use a built-in tool to generate a response.
Typestring
Valuesfile_search
function
computer_use_preview
web_search_preview
image_generation
code_interpreter
mcp

OpenAI.ToolChoiceObjectWebSearch

web_search is not yet available via Azure OpenAI.
NameTypeDescriptionRequiredDefault
typeenum
Possible values: web_search_preview
Yes

OpenAI.ToolChoiceOptions

Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools.
PropertyValue
DescriptionControls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or
more tools.

required means the model must call one or more tools.
Typestring
Valuesnone
auto
required

OpenAI.ToolType

A tool that can be used to generate a response.
PropertyValue
DescriptionA tool that can be used to generate a response.
Typestring
Valuesfile_search
function
computer_use_preview
web_search_preview
mcp
code_interpreter
image_generation
local_shell

OpenAI.TopLogProb

The top log probability of a token.
NameTypeDescriptionRequiredDefault
bytesarrayYes
logprobnumberYes
tokenstringYes

OpenAI.UpdateVectorStoreFileAttributesRequest

NameTypeDescriptionRequiredDefault
attributesobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard. Keys are strings
with a maximum length of 64 characters. Values are strings with a maximum
length of 512 characters, booleans, or numbers.
Yes
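The limits stated above (at most 16 pairs; keys up to 64 characters; values that are strings up to 512 characters, booleans, or numbers) can be validated before sending the update. A sketch with a hypothetical helper:

```python
def validate_attributes(attributes: dict) -> dict:
    """Enforce the documented limits on vector store file attributes."""
    if len(attributes) > 16:
        raise ValueError("at most 16 key-value pairs are allowed")
    for key, value in attributes.items():
        if not isinstance(key, str) or len(key) > 64:
            raise ValueError(f"invalid key: {key!r}")
        if isinstance(value, str):
            if len(value) > 512:
                raise ValueError(f"string value too long for key {key!r}")
        elif not isinstance(value, (bool, int, float)):
            raise ValueError(f"unsupported value type for key {key!r}")
    return attributes

attrs = validate_attributes({"author": "jane", "year": 2024, "draft": False})
```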

OpenAI.UpdateVectorStoreRequest

NameTypeDescriptionRequiredDefault
expires_afterobjectThe expiration policy for a vector store.No
└─ anchorenumAnchor timestamp after which the expiration policy applies. Supported anchors: last_active_at.
Possible values: last_active_at
No
└─ daysintegerThe number of days after the anchor time that the vector store will expire.No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
namestringThe name of the vector store.No
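
A minimal request body for this schema might look like the following sketch; the field names follow the table above, but the concrete values (the store name, the 7-day expiry, the metadata pair) are hypothetical.

```python
# Illustrative UpdateVectorStoreRequest body; values are hypothetical.
update_vector_store_request = {
    "name": "support-docs",          # new display name for the store
    "expires_after": {
        "anchor": "last_active_at",  # the only supported anchor
        "days": 7,                   # expire 7 days after last activity
    },
    "metadata": {"team": "docs"},    # up to 16 string key-value pairs
}
```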

OpenAI.ValidateGraderRequest

NameTypeDescriptionRequiredDefault
graderobjectA StringCheckGrader object that performs a string comparison between input and reference using a specified operation.Yes
└─ calculate_outputstringA formula to calculate the output based on grader results.No
└─ evaluation_metricenumThe evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l.
Possible values: fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l
No
└─ gradersobjectNo
└─ image_tagstringThe image tag to use for the python script.No
└─ inputarrayThe input text. This may include template strings.No
└─ modelstringThe model to use for the evaluation.No
└─ namestringThe name of the grader.No
└─ operationenumThe string check operation to perform. One of eq, ne, like, or ilike.
Possible values: eq, ne, like, ilike
No
└─ rangearrayThe range of the score. Defaults to [0, 1].No
└─ referencestringThe text being graded against.No
└─ sampling_paramsThe sampling parameters for the model.No
└─ sourcestringThe source code of the python script.No
└─ typeenumThe object type, which is always multi.
Possible values: multi
No
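
The table above flattens the union of grader variants into one row set. A sketch of a request validating a string-check grader might look like this; the grader name and the template strings are hypothetical.

```python
# Illustrative ValidateGraderRequest body for a string-check grader.
# Field names follow the schema above; the name and templates are made up.
validate_grader_request = {
    "grader": {
        "type": "string_check",
        "name": "exact-answer-check",       # hypothetical grader name
        "operation": "eq",                  # one of eq, ne, like, ilike
        "input": "{{sample.output_text}}",  # input text (template string)
        "reference": "{{item.answer}}",     # text being graded against
    }
}
```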

OpenAI.ValidateGraderResponse

NameTypeDescriptionRequiredDefault
graderobjectA StringCheckGrader object that performs a string comparison between input and reference using a specified operation.No
└─ calculate_outputstringA formula to calculate the output based on grader results.No
└─ evaluation_metricenumThe evaluation metric to use. One of fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, or rouge_l.
Possible values: fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l
No
└─ gradersobjectNo
└─ image_tagstringThe image tag to use for the python script.No
└─ inputarrayThe input text. This may include template strings.No
└─ modelstringThe model to use for the evaluation.No
└─ namestringThe name of the grader.No
└─ operationenumThe string check operation to perform. One of eq, ne, like, or ilike.
Possible values: eq, ne, like, ilike
No
└─ rangearrayThe range of the score. Defaults to [0, 1].No
└─ referencestringThe text being graded against.No
└─ sampling_paramsThe sampling parameters for the model.No
└─ sourcestringThe source code of the python script.No
└─ typeenumThe object type, which is always multi.
Possible values: multi
No

OpenAI.VectorStoreExpirationAfter

The expiration policy for a vector store.
NameTypeDescriptionRequiredDefault
anchorenumAnchor timestamp after which the expiration policy applies. Supported anchors: last_active_at.
Possible values: last_active_at
Yes
daysintegerThe number of days after the anchor time that the vector store will expire.Yes
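
As a sketch of how the two fields combine: the store's expiry is the anchor timestamp plus `days` worth of seconds (the `last_active_at` value below is a made-up Unix timestamp).

```python
# Deriving an expiry time from a VectorStoreExpirationAfter policy.
policy = {"anchor": "last_active_at", "days": 7}
last_active_at = 1_700_000_000                      # hypothetical Unix seconds
expires_at = last_active_at + policy["days"] * 86_400
```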

OpenAI.VectorStoreFileAttributes

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers. Type: object
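
The documented constraints can be checked client-side before sending a request; this helper is a sketch, not part of any SDK.

```python
# Sketch: validate the documented attribute constraints (at most 16 pairs,
# keys up to 64 chars, string values up to 512 chars; booleans and numbers
# are also allowed as values).
def check_attributes(attributes: dict) -> None:
    if len(attributes) > 16:
        raise ValueError("at most 16 key-value pairs are allowed")
    for key, value in attributes.items():
        if not isinstance(key, str) or len(key) > 64:
            raise ValueError(f"invalid key: {key!r}")
        if isinstance(value, str):
            if len(value) > 512:
                raise ValueError(f"value too long for key {key!r}")
        elif not isinstance(value, (bool, int, float)):
            raise ValueError(f"unsupported value type for key {key!r}")

check_attributes({"author": "contoso", "chunk": 3, "reviewed": True})
```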

OpenAI.VectorStoreFileBatchObject

A batch of files attached to a vector store.
NameTypeDescriptionRequiredDefault
created_atintegerThe Unix timestamp (in seconds) for when the vector store files batch was created.Yes
file_countsobjectYes
└─ cancelledintegerThe number of files that were cancelled.No
└─ completedintegerThe number of files that have been processed.No
└─ failedintegerThe number of files that have failed to process.No
└─ in_progressintegerThe number of files that are currently being processed.No
└─ totalintegerThe total number of files.No
idstringThe identifier, which can be referenced in API endpoints.Yes
objectenumThe object type, which is always vector_store.files_batch.
Possible values: vector_store.files_batch
Yes
statusenumThe status of the vector store files batch, which can be either in_progress, completed, cancelled, or failed.
Possible values: in_progress, completed, cancelled, failed
Yes
vector_store_idstringThe ID of the vector store that the File is attached to.Yes
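
When polling a batch, the terminal states and the `file_counts` breakdown can be combined into a simple progress check; the identifier and counts below are illustrative.

```python
# Sketch: interpreting a VectorStoreFileBatchObject (values hypothetical).
batch = {
    "id": "vsfb_abc123",                   # hypothetical identifier
    "object": "vector_store.files_batch",
    "status": "in_progress",
    "file_counts": {"in_progress": 2, "completed": 8,
                    "failed": 0, "cancelled": 0, "total": 10},
}
done = batch["status"] in ("completed", "cancelled", "failed")
progress = batch["file_counts"]["completed"] / batch["file_counts"]["total"]
```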

OpenAI.VectorStoreFileObject

A list of files attached to a vector store.
NameTypeDescriptionRequiredDefault
attributesobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard. Keys are strings
with a maximum length of 64 characters. Values are strings with a maximum
length of 512 characters, booleans, or numbers.
No
chunking_strategyobjectNo
└─ typeenum
Possible values: static, other
No
created_atintegerThe Unix timestamp (in seconds) for when the vector store file was created.Yes
idstringThe identifier, which can be referenced in API endpoints.Yes
last_errorobjectThe last error associated with this vector store file. Will be null if there are no errors.Yes
└─ codeenumOne of server_error, unsupported_file, or invalid_file.
Possible values: server_error, unsupported_file, invalid_file
No
└─ messagestringA human-readable description of the error.No
objectenumThe object type, which is always vector_store.file.
Possible values: vector_store.file
Yes
statusenumThe status of the vector store file, which can be either in_progress, completed, cancelled, or failed. The status completed indicates that the vector store file is ready for use.
Possible values: in_progress, completed, cancelled, failed
Yes
usage_bytesintegerThe total vector store usage in bytes. Note that this may be different from the original file size.Yes
vector_store_idstringThe ID of the vector store that the File is attached to.Yes

OpenAI.VectorStoreObject

A vector store is a collection of processed files that can be used by the file_search tool.
NameTypeDescriptionRequiredDefault
created_atintegerThe Unix timestamp (in seconds) for when the vector store was created.Yes
expires_afterOpenAI.VectorStoreExpirationAfterThe expiration policy for a vector store.No
expires_atintegerThe Unix timestamp (in seconds) for when the vector store will expire.No
file_countsobjectYes
└─ cancelledintegerThe number of files that were cancelled.No
└─ completedintegerThe number of files that have been successfully processed.No
└─ failedintegerThe number of files that have failed to process.No
└─ in_progressintegerThe number of files that are currently being processed.No
└─ totalintegerThe total number of files.No
idstringThe identifier, which can be referenced in API endpoints.Yes
last_active_atintegerThe Unix timestamp (in seconds) for when the vector store was last active.Yes
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
Yes
namestringThe name of the vector store.Yes
objectenumThe object type, which is always vector_store.
Possible values: vector_store
Yes
statusenumThe status of the vector store, which can be either expired, in_progress, or completed. A status of completed indicates that the vector store is ready for use.
Possible values: expired, in_progress, completed
Yes
usage_bytesintegerThe total number of bytes used by the files in the vector store.Yes

OpenAI.VoiceIdsShared

PropertyValue
Typestring
Valuesalloy
ash
ballad
coral
echo
fable
onyx
nova
sage
shimmer
verse

OpenAI.WebSearchAction

Discriminator for OpenAI.WebSearchAction

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.WebSearchActionTypeYes

OpenAI.WebSearchActionFind

Action type “find”: Searches for a pattern within a loaded page.
NameTypeDescriptionRequiredDefault
patternstringThe pattern or text to search for within the page.Yes
typeenumThe action type.
Possible values: find
Yes
urlstringThe URL of the page searched for the pattern.Yes

OpenAI.WebSearchActionOpenPage

Action type “open_page” - Opens a specific URL from search results.
NameTypeDescriptionRequiredDefault
typeenumThe action type.
Possible values: open_page
Yes
urlstringThe URL opened by the model.Yes

OpenAI.WebSearchActionSearch

Action type “search” - Performs a web search query.
NameTypeDescriptionRequiredDefault
querystringThe search query.Yes
typeenumThe action type.
Possible values: search
Yes

OpenAI.WebSearchActionType

PropertyValue
Typestring
Valuessearch
open_page
find
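
Since the three action shapes share only the `type` discriminator, consumers typically branch on it; this dispatcher is a sketch using the field names from the action schemas above.

```python
# Sketch: dispatching on the `type` discriminator of a web search action.
def describe_action(action: dict) -> str:
    kind = action["type"]
    if kind == "search":
        return f"searched for {action['query']!r}"
    if kind == "open_page":
        return f"opened {action['url']}"
    if kind == "find":
        return f"looked for {action['pattern']!r} on {action['url']}"
    raise ValueError(f"unknown action type: {kind}")

print(describe_action({"type": "search", "query": "azure openai v1 api"}))
```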

OpenAI.WebSearchPreviewTool

web_search is not yet available via Azure OpenAI.
NameTypeDescriptionRequiredDefault
search_context_sizeenumHigh level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
Possible values: low, medium, high
No
typeenumThe type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
Possible values: web_search_preview
Yes
user_locationobjectNo
└─ typeOpenAI.LocationTypeNo

OpenAI.WebSearchToolCallItemParam

web_search is not yet available via Azure OpenAI.
NameTypeDescriptionRequiredDefault
actionobjectYes
└─ typeOpenAI.WebSearchActionTypeNo
typeenum
Possible values: web_search_call
Yes

OpenAI.WebSearchToolCallItemResource

web_search is not yet available via Azure OpenAI.
NameTypeDescriptionRequiredDefault
actionobjectYes
└─ typeOpenAI.WebSearchActionTypeNo
statusenumThe status of the web search tool call.
Possible values: in_progress, searching, completed, failed
Yes
typeenum
Possible values: web_search_call
Yes

PineconeChatDataSource

NameTypeDescriptionRequiredDefault
parametersobjectThe parameter information to control the use of the Pinecone data source.Yes
└─ allow_partial_resultbooleanIf set to true, the system will allow partial search results to be used and the request will fail if all
partial queries fail. If not specified or specified as false, the request will fail if any search query fails.
NoFalse
└─ authenticationobjectNo
└─ keystringNo
└─ typeenum
Possible values: api_key
No
└─ embedding_dependencyobjectA representation of a data vectorization source usable as an embedding resource with a data source.No
└─ typeAzureChatDataSourceVectorizationSourceTypeThe differentiating identifier for the concrete vectorization source.No
└─ environmentstringThe environment name to use with Pinecone.No
└─ fields_mappingobjectField mappings to apply to data used by the Pinecone data source.
Note that content field mappings are required for Pinecone.
No
└─ content_fieldsarrayNo
└─ content_fields_separatorstringNo
└─ filepath_fieldstringNo
└─ title_fieldstringNo
└─ url_fieldstringNo
└─ in_scopebooleanWhether queries should be restricted to use of the indexed data.No
└─ include_contextsarrayThe output context properties to include on the response.
By default, citations and intent will be requested.
No['citations', 'intent']
└─ index_namestringThe name of the Pinecone database index to use.No
└─ max_search_queriesintegerThe maximum number of rewritten queries that should be sent to the search provider for a single user message.
By default, the system will make an automatic determination.
No
└─ strictnessintegerThe configured strictness of the search relevance filtering.
Higher strictness will increase precision but lower recall of the answer.
No
└─ top_n_documentsintegerThe configured number of documents to feature in the query.No
typeenumThe discriminated type identifier, which is always 'pinecone'.
Possible values: pinecone
Yes
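
A Pinecone entry for the `data_sources` array of a chat completions request might be sketched as below. The environment, index name, key, and embedding deployment are all placeholders; note that `fields_mapping.content_fields` is required for Pinecone.

```python
# Illustrative PineconeChatDataSource entry (all values are placeholders).
pinecone_data_source = {
    "type": "pinecone",
    "parameters": {
        "environment": "example-environment",   # placeholder
        "index_name": "example-index",          # placeholder
        "authentication": {"type": "api_key", "key": "<pinecone-api-key>"},
        "embedding_dependency": {
            "type": "deployment_name",          # assumes an Azure OpenAI
            "deployment_name": "<embedding-deployment>",  # embedding deployment
        },
        "fields_mapping": {"content_fields": ["content"]},  # required for Pinecone
        "top_n_documents": 5,
    },
}
```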

ResponseFormatJSONSchemaRequest

NameTypeDescriptionRequiredDefault
json_schemaobjectJSON Schema for the response formatYes
typeenumType of response format
Possible values: json_schema
Yes
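
An illustrative `response_format` payload for this schema follows; the schema itself (a single required string field named `answer`) is a made-up example.

```python
# Illustrative ResponseFormatJSONSchemaRequest payload (schema is made up).
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "answer",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {"answer": {"type": "string"}},
            "required": ["answer"],
            "additionalProperties": False,
        },
    },
}
```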

ResponseModalities

Output types that you would like the model to generate. Most models are capable of generating text, which is the default: ["text"]. The gpt-4o-audio-preview model can also be used to generate audio. To request that this model generate both text and audio responses, you can use: ["text", "audio"]. Array of: string
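
Putting the pieces together, a request for both text and audio output might be sketched as follows; the voice and format come from the `audio` parameter documented earlier, and the message content is illustrative.

```python
# Sketch: chat completions request body asking for text and audio output.
request_body = {
    "model": "gpt-4o-audio-preview",
    "modalities": ["text", "audio"],
    "audio": {"voice": "alloy", "format": "wav"},   # required with audio output
    "messages": [{"role": "user", "content": "Say hello."}],  # illustrative
}
```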