
Microsoft Foundry Project REST reference

API Version: 2025-11-15-preview

Agents - create agent

POST {endpoint}/agents?api-version=2025-11-15-preview
Creates the agent.

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project. |
| api-version | query | Yes | string | The API version to use for this operation. |

Request Header

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | Yes | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token} |

To generate an auth token using the Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
| Name | Type | Description | Required | Default |
| --- | --- | --- | --- | --- |
| definition | object | | Yes | |
| └─ kind | AgentKind | | No | |
| └─ rai_config | RaiConfig | Configuration for Responsible AI (RAI) content filtering and safety features. | No | |
| description | string | A human-readable description of the agent. | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| name | string | The unique name that identifies the agent; it can be used to retrieve, update, or delete the agent. Must start and end with alphanumeric characters, can contain hyphens in the middle, and must not exceed 63 characters. | Yes | |

Responses

Status Code: 200
Description: The request has succeeded.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | AgentObject | |

Status Code: default
Description: An unexpected error response.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | ApiErrorResponse | |
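The create call above can be sketched with Python's standard library. The endpoint, token, agent name, and definition values below are placeholders, and the shape of the definition object beyond the fields listed in the Request Body table (a kind discriminator) is an assumption, not confirmed by this reference.

```python
import json
import urllib.request

API_VERSION = "2025-11-15-preview"

def build_create_agent_request(endpoint, token, name, definition, description=None):
    """Build (but do not send) a POST {endpoint}/agents request."""
    url = f"{endpoint}/agents?api-version={API_VERSION}"
    body = {"name": name, "definition": definition}
    if description is not None:
        body["description"] = description
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            # Token from: az account get-access-token --resource https://ai.azure.com/
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder endpoint/token; substitute your real project values.
req = build_create_agent_request(
    "https://my-account.services.ai.azure.com/api/projects/_project",
    "<token>",
    "my-agent",
    {"kind": "prompt"},  # assumption: AgentKind is a string discriminator
)
# To actually send: urllib.request.urlopen(req)
```

A 200 response carries the created AgentObject as JSON.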

Agents - list agents

GET {endpoint}/agents?api-version=2025-11-15-preview
Returns the list of all agents.

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project. |
| api-version | query | Yes | string | The API version to use for this operation. |
| kind | query | No | | Filter agents by kind. If not provided, all agents are returned. |
| limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100; the default is 20. |
| order | query | No | string | Sort order by the created_at timestamp of the objects. Possible values: asc (ascending) and desc (descending). |
| after | query | No | string | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
| before | query | No | string | A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list. |

Request Header

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | Yes | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token} |

To generate an auth token using the Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scope: https://ai.azure.com/.default

Responses

Status Code: 200
Description: The request has succeeded.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | object | The response data for a requested list of items. |

Status Code: default
Description: An unexpected error response.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | ApiErrorResponse | |
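The after cursor from the table above can be used to walk every page of results. The HTTP call is stubbed out below so the paging logic itself is visible; the "data", "has_more", and "last_id" field names are assumptions about the list-response shape, not confirmed by this reference.

```python
# Cursor-pagination sketch for GET {endpoint}/agents?after=...
def list_all_agents(fetch_page):
    """fetch_page(after) stands in for the HTTP call and returns a dict
    with 'data' (items), 'has_more' (bool), and 'last_id' (cursor)."""
    agents, after = [], None
    while True:
        page = fetch_page(after)
        agents.extend(page["data"])
        if not page["has_more"]:
            return agents
        after = page["last_id"]  # pass as ?after=... on the next request

# Fake two-page service for illustration.
def fake_fetch(after):
    pages = {
        None: {"data": ["a1", "a2"], "has_more": True, "last_id": "a2"},
        "a2": {"data": ["a3"], "has_more": False, "last_id": "a3"},
    }
    return pages[after]

all_agents = list_all_agents(fake_fetch)
```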

Agents - get agent

GET {endpoint}/agents/{agent_name}?api-version=2025-11-15-preview
Retrieves the agent.

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project. |
| api-version | query | Yes | string | The API version to use for this operation. |
| agent_name | path | Yes | string | The name of the agent to retrieve. |

Request Header

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | Yes | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token} |

To generate an auth token using the Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scope: https://ai.azure.com/.default

Responses

Status Code: 200
Description: The request has succeeded.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | AgentObject | |

Status Code: default
Description: An unexpected error response.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | ApiErrorResponse | |

Agents - update agent

POST {endpoint}/agents/{agent_name}?api-version=2025-11-15-preview
Updates the agent by adding a new version if there are any changes to the agent definition. If there are no changes, the existing agent version is returned.

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project. |
| api-version | query | Yes | string | The API version to use for this operation. |
| agent_name | path | Yes | string | The name of the agent to update. |

Request Header

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | Yes | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token} |

To generate an auth token using the Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
| Name | Type | Description | Required | Default |
| --- | --- | --- | --- | --- |
| definition | object | | Yes | |
| └─ kind | AgentKind | | No | |
| └─ rai_config | RaiConfig | Configuration for Responsible AI (RAI) content filtering and safety features. | No | |
| description | string | A human-readable description of the agent. | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |

Responses

Status Code: 200
Description: The request has succeeded.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | AgentObject | |

Status Code: default
Description: An unexpected error response.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | ApiErrorResponse | |

Agents - delete agent

DELETE {endpoint}/agents/{agent_name}?api-version=2025-11-15-preview
Deletes an agent.

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project. |
| api-version | query | Yes | string | The API version to use for this operation. |
| agent_name | path | Yes | string | The name of the agent to delete. |

Request Header

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | Yes | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token} |

To generate an auth token using the Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scope: https://ai.azure.com/.default

Responses

Status Code: 200
Description: The request has succeeded.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | DeleteAgentResponse | |

Status Code: default
Description: An unexpected error response.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | ApiErrorResponse | |

Agents - update agent from manifest

POST {endpoint}/agents/{agent_name}/import?api-version=2025-11-15-preview
Updates the agent from a manifest by adding a new version if there are any changes to the agent definition. If there are no changes, the existing agent version is returned.

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project. |
| api-version | query | Yes | string | The API version to use for this operation. |
| agent_name | path | Yes | string | The name of the agent to update. |

Request Header

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | Yes | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token} |

To generate an auth token using the Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
| Name | Type | Description | Required | Default |
| --- | --- | --- | --- | --- |
| description | string | A human-readable description of the agent. | No | |
| manifest_id | string | The manifest ID to import the agent version from. | Yes | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| parameter_values | object | The inputs to the manifest that will result in a fully materialized Agent. | Yes | |

Responses

Status Code: 200
Description: The request has succeeded.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | AgentObject | |

Status Code: default
Description: An unexpected error response.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | ApiErrorResponse | |
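A minimal import body for this endpoint can be sketched as below. The manifest ID, metadata, and parameter names are placeholders; the actual keys accepted under parameter_values are defined by the manifest being imported, not by this reference.

```python
import json

# Request body sketch for POST {endpoint}/agents/{agent_name}/import.
body = {
    "manifest_id": "manifest_123",                    # placeholder manifest ID
    "parameter_values": {"model": "my-deployment"},   # keys defined by the manifest
    "description": "Imported from manifest",          # optional
    "metadata": {"team": "search"},                   # optional, max 16 pairs
}
payload = json.dumps(body)
```

Send the payload with Content-Type: application/json; a 200 response returns the resulting AgentObject.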

Agents - list agent container operations

GET {endpoint}/agents/{agent_name}/operations?api-version=2025-11-15-preview
List container operations for an agent.

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project. |
| api-version | query | Yes | string | The API version to use for this operation. |
| agent_name | path | Yes | string | The name of the agent. |
| limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100; the default is 20. |
| order | query | No | string | Sort order by the created_at timestamp of the objects. Possible values: asc (ascending) and desc (descending). |
| after | query | No | string | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
| before | query | No | string | A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list. |

Request Header

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | Yes | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token} |

To generate an auth token using the Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scope: https://ai.azure.com/.default

Responses

Status Code: 200
Description: The request has succeeded.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | object | The response data for a requested list of items. |

Status Code: default
Description: An unexpected error response.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | ApiErrorResponse | |

Agents - get agent container operation

GET {endpoint}/agents/{agent_name}/operations/{operation_id}?api-version=2025-11-15-preview
Get the status of a container operation for an agent.

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project. |
| api-version | query | Yes | string | The API version to use for this operation. |
| agent_name | path | Yes | string | The name of the agent. |
| operation_id | path | Yes | string | The operation ID. |

Request Header

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | Yes | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token} |

To generate an auth token using the Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scope: https://ai.azure.com/.default

Responses

Status Code: 200
Description: The request has succeeded.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | AgentContainerOperationObject | |

Status Code: default
Description: An unexpected error response.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | ApiErrorResponse | |
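Since the container actions below (delete, start, stop, update) return 202 and an operation object, this status endpoint is the natural target for a polling loop. The sketch below stubs out the HTTP call, and the terminal status names are assumptions (typical Azure long-running-operation states), not confirmed by this reference.

```python
import time

# Assumed terminal states for an agent container operation.
TERMINAL = {"succeeded", "failed", "canceled"}

def poll_operation(get_status, interval_s=2.0, timeout_s=300.0):
    """Poll until the operation reaches a terminal status.

    get_status() stands in for GET .../operations/{operation_id}
    and returns the operation's status string.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status.lower() in TERMINAL:
            return status
        time.sleep(interval_s)
    raise TimeoutError("operation did not finish in time")

# Fake status sequence for illustration.
statuses = iter(["running", "running", "succeeded"])
result = poll_operation(lambda: next(statuses), interval_s=0.0)
```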

Agents - create agent version

POST {endpoint}/agents/{agent_name}/versions?api-version=2025-11-15-preview
Create a new agent version.

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project. |
| api-version | query | Yes | string | The API version to use for this operation. |
| agent_name | path | Yes | string | The unique name that identifies the agent; it can be used to retrieve, update, or delete the agent. Must start and end with alphanumeric characters, can contain hyphens in the middle, and must not exceed 63 characters. |

Request Header

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | Yes | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token} |

To generate an auth token using the Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
| Name | Type | Description | Required | Default |
| --- | --- | --- | --- | --- |
| definition | object | | Yes | |
| └─ kind | AgentKind | | No | |
| └─ rai_config | RaiConfig | Configuration for Responsible AI (RAI) content filtering and safety features. | No | |
| description | string | A human-readable description of the agent. | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |

Responses

Status Code: 200
Description: The request has succeeded.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | AgentVersionObject | |

Status Code: default
Description: An unexpected error response.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | ApiErrorResponse | |

Agents - list agent versions

GET {endpoint}/agents/{agent_name}/versions?api-version=2025-11-15-preview
Returns the list of versions of an agent.

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project. |
| api-version | query | Yes | string | The API version to use for this operation. |
| agent_name | path | Yes | string | The name of the agent to retrieve versions for. |
| limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100; the default is 20. |
| order | query | No | string | Sort order by the created_at timestamp of the objects. Possible values: asc (ascending) and desc (descending). |
| after | query | No | string | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
| before | query | No | string | A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list. |

Request Header

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | Yes | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token} |

To generate an auth token using the Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scope: https://ai.azure.com/.default

Responses

Status Code: 200
Description: The request has succeeded.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | object | The response data for a requested list of items. |

Status Code: default
Description: An unexpected error response.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | ApiErrorResponse | |

Agents - get agent version

GET {endpoint}/agents/{agent_name}/versions/{agent_version}?api-version=2025-11-15-preview
Retrieves a specific version of an agent.

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project. |
| api-version | query | Yes | string | The API version to use for this operation. |
| agent_name | path | Yes | string | The name of the agent to retrieve. |
| agent_version | path | Yes | string | The version of the agent to retrieve. |

Request Header

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | Yes | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token} |

To generate an auth token using the Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scope: https://ai.azure.com/.default

Responses

Status Code: 200
Description: The request has succeeded.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | AgentVersionObject | |

Status Code: default
Description: An unexpected error response.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | ApiErrorResponse | |

Agents - delete agent version

DELETE {endpoint}/agents/{agent_name}/versions/{agent_version}?api-version=2025-11-15-preview
Deletes a specific version of an agent.

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project. |
| api-version | query | Yes | string | The API version to use for this operation. |
| agent_name | path | Yes | string | The name of the agent to delete. |
| agent_version | path | Yes | string | The version of the agent to delete. |

Request Header

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | Yes | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token} |

To generate an auth token using the Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scope: https://ai.azure.com/.default

Responses

Status Code: 200
Description: The request has succeeded.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | DeleteAgentVersionResponse | |

Status Code: default
Description: An unexpected error response.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | ApiErrorResponse | |

Agents - get agent container

GET {endpoint}/agents/{agent_name}/versions/{agent_version}/containers/default?api-version=2025-11-15-preview
Get a container for a specific version of an agent.

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project. |
| api-version | query | Yes | string | The API version to use for this operation. |
| agent_name | path | Yes | string | The name of the agent. |
| agent_version | path | Yes | string | The version of the agent. |

Request Header

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | Yes | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token} |

To generate an auth token using the Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scope: https://ai.azure.com/.default

Responses

Status Code: 200
Description: The request has succeeded.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | AgentContainerObject | |

Status Code: default
Description: An unexpected error response.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | ApiErrorResponse | |

Agents - list agent version container operations

GET {endpoint}/agents/{agent_name}/versions/{agent_version}/containers/default/operations?api-version=2025-11-15-preview
List container operations for a specific version of an agent.

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project. |
| api-version | query | Yes | string | The API version to use for this operation. |
| agent_name | path | Yes | string | The name of the agent. |
| agent_version | path | Yes | string | The version of the agent. |
| limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100; the default is 20. |
| order | query | No | string | Sort order by the created_at timestamp of the objects. Possible values: asc (ascending) and desc (descending). |
| after | query | No | string | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
| before | query | No | string | A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list. |

Request Header

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | Yes | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token} |

To generate an auth token using the Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scope: https://ai.azure.com/.default

Responses

Status Code: 200
Description: The request has succeeded.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | object | The response data for a requested list of items. |

Status Code: default
Description: An unexpected error response.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | ApiErrorResponse | |

Agents - delete agent container

POST {endpoint}/agents/{agent_name}/versions/{agent_version}/containers/default:delete?api-version=2025-11-15-preview
Delete a container for a specific version of an agent. If the container doesn't exist, the operation is a no-op. This is a long-running operation that follows the Azure REST API design guidelines for long-running (action) operations: https://github.com/microsoft/api-guidelines/blob/vNext/azure/ConsiderationsForServiceDesign.md#action-operations

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project. |
| api-version | query | Yes | string | The API version to use for this operation. |
| agent_name | path | Yes | string | The name of the agent. |
| agent_version | path | Yes | string | The version of the agent. |

Request Header

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | Yes | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token} |

To generate an auth token using the Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scope: https://ai.azure.com/.default

Responses

Status Code: 202
Description: The request has been accepted for processing, but processing has not yet completed.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | AgentContainerOperationObject | |

Status Code: default
Description: An unexpected error response.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | ApiErrorResponse | |

Agents - start agent container

POST {endpoint}/agents/{agent_name}/versions/{agent_version}/containers/default:start?api-version=2025-11-15-preview
Start a container for a specific version of an agent. If the container is already running, the operation is a no-op. This is a long-running operation that follows the Azure REST API design guidelines for long-running (action) operations: https://github.com/microsoft/api-guidelines/blob/vNext/azure/ConsiderationsForServiceDesign.md#action-operations

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project. |
| api-version | query | Yes | string | The API version to use for this operation. |
| agent_name | path | Yes | string | The name of the agent. |
| agent_version | path | Yes | string | The version of the agent. |

Request Header

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | Yes | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token} |

To generate an auth token using the Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
| Name | Type | Description | Required | Default |
| --- | --- | --- | --- | --- |
| max_replicas | integer | The maximum number of replicas. | No | 1 |
| min_replicas | integer | The minimum number of replicas. | No | 1 |

Responses

Status Code: 202
Description: The request has been accepted for processing, but processing has not yet completed.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | AgentContainerOperationObject | |

Status Code: default
Description: An unexpected error response.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | ApiErrorResponse | |
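The start action can be sketched with Python's standard library. The endpoint, token, agent name, and version below are placeholders; this builds the request without sending it.

```python
import json
import urllib.request

API_VERSION = "2025-11-15-preview"

def build_start_container_request(endpoint, token, agent_name, agent_version,
                                  min_replicas=1, max_replicas=1):
    """Build (but do not send) the POST .../containers/default:start request."""
    url = (f"{endpoint}/agents/{agent_name}/versions/{agent_version}"
           f"/containers/default:start?api-version={API_VERSION}")
    body = {"min_replicas": min_replicas, "max_replicas": max_replicas}
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Placeholder values; a 202 response carries an AgentContainerOperationObject
# whose status can then be polled under .../agents/{agent_name}/operations/{id}.
req = build_start_container_request(
    "https://my-account.services.ai.azure.com/api/projects/_project",
    "<token>", "my-agent", "1", min_replicas=1, max_replicas=2)
```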

Agents - stop agent container

POST {endpoint}/agents/{agent_name}/versions/{agent_version}/containers/default:stop?api-version=2025-11-15-preview
Stop a container for a specific version of an agent. If the container is not running, or is already stopped, the operation is a no-op. This is a long-running operation that follows the Azure REST API design guidelines for long-running (action) operations: https://github.com/microsoft/api-guidelines/blob/vNext/azure/ConsiderationsForServiceDesign.md#action-operations

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project. |
| api-version | query | Yes | string | The API version to use for this operation. |
| agent_name | path | Yes | string | The name of the agent. |
| agent_version | path | Yes | string | The version of the agent. |

Request Header

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | Yes | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token} |

To generate an auth token using the Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scope: https://ai.azure.com/.default

Responses

Status Code: 202
Description: The request has been accepted for processing, but processing has not yet completed.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | AgentContainerOperationObject | |

Status Code: default
Description: An unexpected error response.

| Content-Type | Type | Description |
| --- | --- | --- |
| application/json | ApiErrorResponse | |

Agents - update agent container

POST {endpoint}/agents/{agent_name}/versions/{agent_version}/containers/default:update?api-version=2025-11-15-preview
Update a container for a specific version of an agent. If the container is not running, the operation is a no-op. This is a long-running operation that follows the Azure REST API design guidelines for long-running (action) operations: https://github.com/microsoft/api-guidelines/blob/vNext/azure/ConsiderationsForServiceDesign.md#action-operations

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project. |
| api-version | query | Yes | string | The API version to use for this operation. |
| agent_name | path | Yes | string | The name of the agent. |
| agent_version | path | Yes | string | The version of the agent. |

Request Header

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| Authorization | Yes | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token} |

To generate an auth token using the Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization URL: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
Scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
max_replicasintegerThe maximum number of replicas.No
min_replicasintegerThe minimum number of replicas.No
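A minimal sketch of the request URL and body for this operation, with placeholder endpoint, agent name, and replica counts (both body fields are optional; omit one to leave it unchanged):

```python
import json

# Placeholder values; substitute your own project endpoint and agent.
endpoint = "https://contoso.services.ai.azure.com/api/projects/_project"
agent_name, agent_version = "my-agent", "1"

url = (
    f"{endpoint}/agents/{agent_name}/versions/{agent_version}"
    "/containers/default:update?api-version=2025-11-15-preview"
)
# Hypothetical replica bounds; both max_replicas and min_replicas are optional.
body = json.dumps({"min_replicas": 1, "max_replicas": 3})
```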

Responses

Status Code: 202 Description: The request has been accepted for processing, but processing has not yet completed.
Content-TypeTypeDescription
application/jsonAgentContainerOperationObject
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Agents - create agent version from manifest

POST {endpoint}/agents/{agent_name}/versions:import?api-version=2025-11-15-preview
Create a new agent version from a manifest.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
agent_namepathYesstringThe unique name that identifies the agent. Name can be used to retrieve/update/delete the agent.
- Must start and end with alphanumeric characters,
- Can contain hyphens in the middle
- Must not exceed 63 characters.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
descriptionstringA human-readable description of the agent.No
manifest_idstringThe manifest ID to import the agent version from.Yes
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
parameter_valuesobjectThe inputs to the manifest that will result in a fully materialized Agent.Yes
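A sketch of a request body for this operation. The `manifest_id` and `parameter_values` entries are the two required fields from the table above; the concrete values (manifest ID, parameter names) are hypothetical:

```python
import json

body = {
    "manifest_id": "example-manifest-id",            # hypothetical ID
    "parameter_values": {"model_deployment": "gpt-4o"},  # manifest-specific inputs
    "description": "Version imported from manifest",     # optional
    "metadata": {"team": "search"},                      # optional
}
payload = json.dumps(body)
```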

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonAgentVersionObject
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Agents - create agent from manifest

POST {endpoint}/agents:import?api-version=2025-11-15-preview
Creates an agent from a manifest.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
descriptionstringA human-readable description of the agent.No
manifest_idstringThe manifest ID to import the agent version from.Yes
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
namestringThe unique name that identifies the agent. Name can be used to retrieve/update/delete the agent.
- Must start and end with alphanumeric characters,
- Can contain hyphens in the middle
- Must not exceed 63 characters.
Yes
parameter_valuesobjectThe inputs to the manifest that will result in a fully materialized Agent.Yes
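Unlike the per-version import above, this operation also requires a `name`. The documented naming rules (start and end alphanumeric, hyphens only in the middle, at most 63 characters) can be checked client-side; the regex and sample values below are a sketch, not an official validator:

```python
import json
import re

def valid_agent_name(name: str) -> bool:
    """Check the documented rules: starts and ends with an alphanumeric
    character, hyphens allowed only in the middle, max 63 characters."""
    return bool(re.fullmatch(
        r"[A-Za-z0-9](?:[A-Za-z0-9-]{0,61}[A-Za-z0-9])?", name))

body = {
    "name": "translator-agent",                      # must pass valid_agent_name
    "manifest_id": "example-manifest-id",            # hypothetical ID
    "parameter_values": {"model_deployment": "gpt-4o"},
}
assert valid_agent_name(body["name"])
payload = json.dumps(body)
```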

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonAgentObject
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Connections - list

GET {endpoint}/connections?api-version=2025-11-15-preview
List all connections in the project, without populating connection credentials

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
connectionTypequeryNoList connections of this specific type
defaultConnectionqueryNobooleanList connections that are default connections
x-ms-client-request-idheaderNoAn opaque, globally-unique, client-generated string identifier for the request.
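The optional filters above are passed as query parameters. A sketch of the resulting URL, with a placeholder endpoint and a hypothetical connection type value (the allowed `connectionType` values are not enumerated in this table):

```python
from urllib.parse import urlencode

# Placeholder project endpoint.
endpoint = "https://contoso.services.ai.azure.com/api/projects/_project"

params = {
    "api-version": "2025-11-15-preview",
    "connectionType": "AzureBlobStorage",  # hypothetical filter value
    "defaultConnection": "true",           # only list default connections
}
url = f"{endpoint}/connections?{urlencode(params)}"
```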

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonPagedConnection
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Connections - get

GET {endpoint}/connections/{name}?api-version=2025-11-15-preview
Get a connection by name, without populating connection credentials

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
namepathYesstringThe friendly name of the connection, provided by the user.
x-ms-client-request-idheaderNoAn opaque, globally-unique, client-generated string identifier for the request.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonConnection
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Connections - get with credentials

POST {endpoint}/connections/{name}/getConnectionWithCredentials?api-version=2025-11-15-preview
Get a connection by name, with its connection credentials

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
namepathYesstringThe friendly name of the connection, provided by the user.
x-ms-client-request-idheaderNoAn opaque, globally-unique, client-generated string identifier for the request.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonConnection
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Datasets - list latest

GET {endpoint}/datasets?api-version=2025-11-15-preview
List the latest version of each DatasetVersion

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonPagedDatasetVersion
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Datasets - list versions

GET {endpoint}/datasets/{name}/versions?api-version=2025-11-15-preview
List all versions of the given DatasetVersion

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
namepathYesstringThe name of the resource

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonPagedDatasetVersion
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Datasets - get version

GET {endpoint}/datasets/{name}/versions/{version}?api-version=2025-11-15-preview
Get the specific version of the DatasetVersion. The service returns 404 Not Found error if the DatasetVersion does not exist.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
namepathYesstringThe name of the resource
versionpathYesstringThe specific version id of the DatasetVersion to retrieve.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonDatasetVersion
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Datasets - delete version

DELETE {endpoint}/datasets/{name}/versions/{version}?api-version=2025-11-15-preview
Delete the specific version of the DatasetVersion. The service returns 204 No Content if the DatasetVersion was deleted successfully or if the DatasetVersion does not exist.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
namepathYesstringThe name of the resource
versionpathYesstringThe version of the DatasetVersion to delete.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 204 Description: There is no content to send for this request, but the headers may be useful.
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Datasets - create or update version

PATCH {endpoint}/datasets/{name}/versions/{version}?api-version=2025-11-15-preview
Create a new or update an existing DatasetVersion with the given version id

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
namepathYesstringThe name of the resource
versionpathYesstringThe specific version id of the DatasetVersion to create or update.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/merge-patch+json
NameTypeDescriptionRequiredDefault
descriptionstringThe asset description text.No
tagsobjectTag dictionary. Tags can be added, removed, and updated.No
typeobjectEnum to determine the type of data.Yes
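Because this is a PATCH with `application/merge-patch+json`, only the fields present in the body are changed. A sketch of such a body; `type` is required but its allowed enum values are not listed in this table, so `"uri_file"` here is an assumption:

```python
import json

patch = {
    "type": "uri_file",                    # assumed enum value, required field
    "description": "Training split, v2",   # optional
    "tags": {"stage": "curated"},          # optional; merge-patch semantics apply
}
payload = json.dumps(patch)
headers = {"Content-Type": "application/merge-patch+json"}
```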

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonDatasetVersion
Status Code: 201 Description: The request has succeeded and a new resource has been created as a result.
Content-TypeTypeDescription
application/jsonDatasetVersion
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Datasets - get credentials

POST {endpoint}/datasets/{name}/versions/{version}/credentials?api-version=2025-11-15-preview
Get the SAS credential to access the storage account associated with a Dataset version.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
namepathYesstringThe name of the resource
versionpathYesstringThe specific version id of the DatasetVersion to operate on.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonAssetCredentialResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Datasets - start pending upload version

POST {endpoint}/datasets/{name}/versions/{version}/startPendingUpload?api-version=2025-11-15-preview
Start a new or get an existing pending upload of a dataset for a specific version.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
namepathYesstringThe name of the resource
versionpathYesstringThe specific version id of the DatasetVersion to operate on.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
connectionNamestringAzure Storage Account connection name to use for generating temporary SAS tokenNo
pendingUploadIdstringIf PendingUploadId is not provided, a random GUID will be used.No
pendingUploadTypeenumBlobReference is the only supported type.
Possible values: BlobReference
Yes
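A minimal request body for this operation. `pendingUploadType` is the only required field and `BlobReference` is its only documented value; `connectionName` and `pendingUploadId` are omitted here, in which case the service generates a random GUID for the upload ID:

```python
import json

body = {"pendingUploadType": "BlobReference"}
payload = json.dumps(body)
```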

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonPendingUploadResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Deployments - list

GET {endpoint}/deployments?api-version=2025-11-15-preview
List all deployed models in the project

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
modelPublisherqueryNostringModel publisher to filter models by
modelNamequeryNostringModel name (the publisher specific name) to filter models by
deploymentTypequeryNoType of deployment to filter list by
x-ms-client-request-idheaderNoAn opaque, globally-unique, client-generated string identifier for the request.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonPagedDeployment
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Deployments - get

GET {endpoint}/deployments/{name}?api-version=2025-11-15-preview
Get a deployed model.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
namepathYesstringName of the deployment
x-ms-client-request-idheaderNoAn opaque, globally-unique, client-generated string identifier for the request.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonDeployment
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Evaluation rules - list

GET {endpoint}/evaluationrules?api-version=2025-11-15-preview
List all evaluation rules.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
actionTypequeryNoFilter by the type of evaluation rule.
agentNamequeryNostringFilter by the agent name.
enabledqueryNobooleanFilter by the enabled status.
x-ms-client-request-idheaderNoAn opaque, globally-unique, client-generated string identifier for the request.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonPagedEvaluationRule
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Evaluation rules - get

GET {endpoint}/evaluationrules/{id}?api-version=2025-11-15-preview
Get an evaluation rule.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
idpathYesstringUnique identifier for the evaluation rule.
x-ms-client-request-idheaderNoAn opaque, globally-unique, client-generated string identifier for the request.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonEvaluationRule
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Evaluation rules - delete

DELETE {endpoint}/evaluationrules/{id}?api-version=2025-11-15-preview
Delete an evaluation rule.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
idpathYesstringUnique identifier for the evaluation rule.
x-ms-client-request-idheaderNoAn opaque, globally-unique, client-generated string identifier for the request.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 204 Description: There is no content to send for this request, but the headers may be useful.
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Evaluation rules - create or update

PUT {endpoint}/evaluationrules/{id}?api-version=2025-11-15-preview
Create or update an evaluation rule.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
idpathYesstringUnique identifier for the evaluation rule.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
actionobjectEvaluation action model.Yes
└─ typeEvaluationRuleActionTypeType of the evaluation action.No
descriptionstringDescription for the evaluation rule.No
displayNamestringDisplay Name for the evaluation rule.No
enabledbooleanIndicates whether the evaluation rule is enabled. Default is true.Yes
eventTypeobjectType of the evaluation rule event.Yes
filterobjectEvaluation filter model.No
└─ agentNamestringFilter by agent name.No
idstringUnique identifier for the evaluation rule.Yes
systemDataobjectSystem metadata for the evaluation rule.Yes
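A sketch of a request body for this operation. The table marks `action`, `eventType`, `id`, `enabled`, and `systemData` as required, but does not fully specify the shapes of the nested objects, so the empty objects and field values below are placeholders rather than a working configuration:

```python
import json

rule = {
    "id": "rule-001",                         # must match the {id} in the URL
    "displayName": "Flag unsafe agent responses",
    "enabled": True,
    "action": {},       # shape not documented in this table; placeholder
    "eventType": {},    # shape not documented in this table; placeholder
    "systemData": {},   # shape not documented in this table; placeholder
    "filter": {"agentName": "my-agent"},      # optional; hypothetical agent
}
payload = json.dumps(rule)
```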

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonEvaluationRule
Status Code: 201 Description: The request has succeeded and a new resource has been created as a result.
Content-TypeTypeDescription
application/jsonEvaluationRule
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Evaluation taxonomies - list

GET {endpoint}/evaluationtaxonomies?api-version=2025-11-15-preview
List evaluation taxonomies

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
inputNamequeryNostringFilter by the evaluation input name.
inputTypequeryNostringFilter by taxonomy input type.
x-ms-client-request-idheaderNoAn opaque, globally-unique, client-generated string identifier for the request.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonPagedEvaluationTaxonomy
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Evaluation taxonomies - get

GET {endpoint}/evaluationtaxonomies/{name}?api-version=2025-11-15-preview
Get an evaluation taxonomy by name.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
namepathYesstringThe name of the resource
x-ms-client-request-idheaderNoAn opaque, globally-unique, client-generated string identifier for the request.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonEvaluationTaxonomy
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Evaluation taxonomies - delete

DELETE {endpoint}/evaluationtaxonomies/{name}?api-version=2025-11-15-preview
Delete an evaluation taxonomy by name.

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.
name | path | Yes | string | The name of the resource
x-ms-client-request-id | header | No | string | An opaque, globally-unique, client-generated string identifier for the request.

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 204 Description: There is no content to send for this request, but the headers may be useful.
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | Azure.Core.Foundations.ErrorResponse

Evaluation taxonomies - create

PUT {endpoint}/evaluationtaxonomies/{name}?api-version=2025-11-15-preview
Create an evaluation taxonomy.

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.
name | path | Yes | string | The name of the evaluation taxonomy.

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
Name | Type | Description | Required
description | string | The asset description text. | No
properties | object | Additional properties for the evaluation taxonomy. | No
tags | object | Tag dictionary. Tags can be added, removed, and updated. | No
taxonomyCategories | array | List of taxonomy categories. | No
taxonomyInput | object | Input configuration for the evaluation taxonomy. | Yes
└─ type | EvaluationTaxonomyInputType | Input type of the evaluation taxonomy. | No

Responses

Status Code: 200 Description: The request has succeeded.
Content-Type | Type
application/json | EvaluationTaxonomy
Status Code: 201 Description: The request has succeeded and a new resource has been created as a result.
Content-Type | Type
application/json | EvaluationTaxonomy
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | Azure.Core.Foundations.ErrorResponse
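A minimal sketch of the PUT request body, assuming placeholder endpoint, name, and token values; the taxonomyInput type value is hypothetical, since the allowed EvaluationTaxonomyInputType values are defined by the schema rather than listed here:

```python
import json
import urllib.request

# Placeholder values -- substitute your own project endpoint, name, and token.
endpoint = "https://my-account.services.ai.azure.com/api/projects/_project"
token = "<Azure_AI_Foundry_Project_Auth_Token>"
name = "my-taxonomy"

body = {
    "description": "Example taxonomy",  # optional
    # taxonomyInput is required; the 'type' value below is a hypothetical
    # placeholder for an EvaluationTaxonomyInputType enum value.
    "taxonomyInput": {"type": "<EvaluationTaxonomyInputType>"},
}
req = urllib.request.Request(
    f"{endpoint}/evaluationtaxonomies/{name}?api-version=2025-11-15-preview",
    data=json.dumps(body).encode("utf-8"),
    method="PUT",
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would return 200 (updated) or 201 (created).
```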

Evaluation taxonomies - update

PATCH {endpoint}/evaluationtaxonomies/{name}?api-version=2025-11-15-preview
Update an evaluation taxonomy.

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.
name | path | Yes | string | The name of the evaluation taxonomy.

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
Name | Type | Description | Required
description | string | The asset description text. | No
properties | object | Additional properties for the evaluation taxonomy. | No
tags | object | Tag dictionary. Tags can be added, removed, and updated. | No
taxonomyCategories | array | List of taxonomy categories. | No
taxonomyInput | object | Input configuration for the evaluation taxonomy. | No
└─ type | EvaluationTaxonomyInputType | Input type of the evaluation taxonomy. | No

Responses

Status Code: 200 Description: The request has succeeded.
Content-Type | Type
application/json | EvaluationTaxonomy
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | Azure.Core.Foundations.ErrorResponse

Evaluators - list latest versions

GET {endpoint}/evaluators?api-version=2025-11-15-preview
List the latest version of each evaluator

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.
type | query | No | string | Filter evaluators by type. Possible values: 'all', 'custom', 'builtin'.
limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-Type | Type
application/json | PagedEvaluatorVersion
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | Azure.Core.Foundations.ErrorResponse
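The optional type and limit query parameters can be appended with the standard library's urlencode; the endpoint below is a placeholder:

```python
from urllib.parse import urlencode

# Placeholder endpoint -- substitute your own project endpoint.
endpoint = "https://my-account.services.ai.azure.com/api/projects/_project"

# 'type' filters by 'all', 'custom', or 'builtin'; 'limit' ranges 1-100
# (default 20).
params = {"api-version": "2025-11-15-preview", "type": "custom", "limit": 50}
url = f"{endpoint}/evaluators?{urlencode(params)}"
```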

Evaluators - list versions

GET {endpoint}/evaluators/{name}/versions?api-version=2025-11-15-preview
List all versions of the given evaluator

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.
name | path | Yes | string | The name of the resource
type | query | No | string | Filter evaluators by type. Possible values: 'all', 'custom', 'builtin'.
limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-Type | Type
application/json | PagedEvaluatorVersion
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | Azure.Core.Foundations.ErrorResponse

Evaluators - create version

POST {endpoint}/evaluators/{name}/versions?api-version=2025-11-15-preview
Create a new EvaluatorVersion with an auto-incremented version id.

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.
name | path | Yes | string | The name of the resource

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
Name | Type | Description | Required
categories | array | The categories of the evaluator | Yes
definition | object | Base evaluator configuration with discriminator | Yes
└─ data_schema | | The JSON schema (Draft 2020-12) for the evaluator's input data. This includes parameters like type, properties, required. | No
└─ init_parameters | | The JSON schema (Draft 2020-12) for the evaluator's input parameters. This includes parameters like type, properties, required. | No
└─ metrics | object | List of output metrics produced by this evaluator | No
└─ type | EvaluatorDefinitionType | The type of evaluator definition | No
description | string | The asset description text. | No
display_name | string | Display Name for evaluator. It helps to find the evaluator easily in Foundry. It does not need to be unique. | No
evaluator_type | object | The type of the evaluator | Yes
metadata | object | Metadata about the evaluator | No
tags | object | Tag dictionary. Tags can be added, removed, and updated. | No

Responses

Status Code: 201 Description: The request has succeeded and a new resource has been created as a result.
Content-Type | Type
application/json | EvaluatorVersion
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | Azure.Core.Foundations.ErrorResponse
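A sketch of the create-version request body; the definition type and evaluator_type values are hypothetical placeholders (the allowed values come from EvaluatorDefinitionType and the evaluator_type schema, which this page does not enumerate), while data_schema is an ordinary JSON Schema (Draft 2020-12) object:

```python
import json

body = {
    "categories": ["quality"],                       # required
    "evaluator_type": {"type": "<evaluator-type>"},  # required; hypothetical shape
    "definition": {                                  # required
        "type": "<EvaluatorDefinitionType>",         # hypothetical placeholder
        # JSON Schema (Draft 2020-12) describing the evaluator's input rows.
        "data_schema": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "response": {"type": "string"},
            },
            "required": ["query", "response"],
        },
    },
    "display_name": "My custom evaluator",  # optional; need not be unique
}
# POST this payload to {endpoint}/evaluators/{name}/versions
payload = json.dumps(body).encode("utf-8")
```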

Evaluators - get version

GET {endpoint}/evaluators/{name}/versions/{version}?api-version=2025-11-15-preview
Get the specific version of the EvaluatorVersion. The service returns 404 Not Found error if the EvaluatorVersion does not exist.

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.
name | path | Yes | string | The name of the resource
version | path | Yes | string | The specific version id of the EvaluatorVersion to retrieve.

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-Type | Type
application/json | EvaluatorVersion
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | Azure.Core.Foundations.ErrorResponse

Evaluators - delete version

DELETE {endpoint}/evaluators/{name}/versions/{version}?api-version=2025-11-15-preview
Delete the specific version of the EvaluatorVersion. The service returns 204 No Content if the EvaluatorVersion was deleted successfully or if the EvaluatorVersion does not exist.

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.
name | path | Yes | string | The name of the resource
version | path | Yes | string | The version of the EvaluatorVersion to delete.

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 204 Description: There is no content to send for this request, but the headers may be useful.
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | Azure.Core.Foundations.ErrorResponse

Evaluators - update version

PATCH {endpoint}/evaluators/{name}/versions/{version}?api-version=2025-11-15-preview
Update an existing EvaluatorVersion with the given version id

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.
name | path | Yes | string | The name of the resource
version | path | Yes | string | The version of the EvaluatorVersion to update.

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
Name | Type | Description | Required
categories | array | The categories of the evaluator | No
description | string | The asset description text. | No
display_name | string | Display Name for evaluator. It helps to find the evaluator easily in Foundry. It does not need to be unique. | No
metadata | object | Metadata about the evaluator | No
tags | object | Tag dictionary. Tags can be added, removed, and updated. | No

Responses

Status Code: 200 Description: The request has succeeded.
Content-Type | Type
application/json | EvaluatorVersion
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | Azure.Core.Foundations.ErrorResponse

Indexes - list latest

GET {endpoint}/indexes?api-version=2025-11-15-preview
List the latest version of each Index

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-Type | Type
application/json | PagedIndex
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | Azure.Core.Foundations.ErrorResponse

Indexes - list versions

GET {endpoint}/indexes/{name}/versions?api-version=2025-11-15-preview
List all versions of the given Index

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.
name | path | Yes | string | The name of the resource

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-Type | Type
application/json | PagedIndex
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | Azure.Core.Foundations.ErrorResponse

Indexes - get version

GET {endpoint}/indexes/{name}/versions/{version}?api-version=2025-11-15-preview
Get the specific version of the Index. The service returns 404 Not Found error if the Index does not exist.

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.
name | path | Yes | string | The name of the resource
version | path | Yes | string | The specific version id of the Index to retrieve.

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-Type | Type
application/json | Index
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | Azure.Core.Foundations.ErrorResponse

Indexes - delete version

DELETE {endpoint}/indexes/{name}/versions/{version}?api-version=2025-11-15-preview
Delete the specific version of the Index. The service returns 204 No Content if the Index was deleted successfully or if the Index does not exist.

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.
name | path | Yes | string | The name of the resource
version | path | Yes | string | The version of the Index to delete.

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 204 Description: There is no content to send for this request, but the headers may be useful.
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | Azure.Core.Foundations.ErrorResponse

Indexes - create or update version

PATCH {endpoint}/indexes/{name}/versions/{version}?api-version=2025-11-15-preview
Create a new Index, or update an existing one, with the given version id.

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.
name | path | Yes | string | The name of the resource
version | path | Yes | string | The specific version id of the Index to create or update.

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/merge-patch+json
Name | Type | Description | Required
description | string | The asset description text. | No
tags | object | Tag dictionary. Tags can be added, removed, and updated. | No
type | object | | Yes

Responses

Status Code: 200 Description: The request has succeeded.
Content-Type | Type
application/json | Index
Status Code: 201 Description: The request has succeeded and a new resource has been created as a result.
Content-Type | Type
application/json | Index
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | Azure.Core.Foundations.ErrorResponse
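Unlike the other write operations on this page, this one takes application/merge-patch+json, so per JSON Merge Patch semantics a null value removes a member. A sketch with placeholder endpoint, token, and a hypothetical index type value:

```python
import json
import urllib.request

# Placeholder values -- substitute your own project endpoint and token.
endpoint = "https://my-account.services.ai.azure.com/api/projects/_project"
token = "<Azure_AI_Foundry_Project_Auth_Token>"
name, version = "my-index", "1"

patch = {
    "type": "<index-type>",                # required; hypothetical value
    "description": "Updated description",  # optional
    "tags": {"stage": None},               # merge-patch: null removes the 'stage' tag
}
req = urllib.request.Request(
    f"{endpoint}/indexes/{name}/versions/{version}?api-version=2025-11-15-preview",
    data=json.dumps(patch).encode("utf-8"),
    method="PATCH",
    headers={
        "Authorization": f"Bearer {token}",
        # Note the merge-patch content type, not plain application/json.
        "Content-Type": "application/merge-patch+json",
    },
)
# urllib.request.urlopen(req) would return 200 (updated) or 201 (created).
```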

Insights - generate

POST {endpoint}/insights?api-version=2025-11-15-preview
Generate Insights

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.
Repeatability-Request-ID | header | No | string | Unique, client-generated identifier for ensuring request idempotency. Use the same ID for retries to prevent duplicate evaluations.
Repeatability-First-Sent | header | No | string | Timestamp indicating when this request was first initiated. Used in conjunction with Repeatability-Request-ID for idempotency control.

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
Name | Type | Description | Required
displayName | string | User friendly display name for the insight. | Yes
id | string | The unique identifier for the insights report. | Yes
metadata | object | Metadata about the insights. | Yes
└─ completedAt | string | The timestamp when the insights were completed. | No
└─ createdAt | string | The timestamp when the insights were created. | No
request | object | The request of the insights report. | Yes
└─ type | InsightType | The type of request. | No
result | object | The result of the insights. | No
└─ type | InsightType | The type of insights result. | No
state | object | Enum describing allowed operation states. | Yes

Responses

Status Code: 201 Description: The request has succeeded and a new resource has been created as a result.
Content-Type | Type
application/json | Insight
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | Azure.Core.Foundations.ErrorResponse
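The two repeatability headers make retries safe: reusing the same pair on a retry lets the service recognize the request and avoid creating a duplicate insights run. One way to populate them from the standard library (the token is a placeholder):

```python
import uuid
from email.utils import formatdate

# Generate the idempotency headers once, and reuse the same values on retries.
headers = {
    "Authorization": "Bearer <Azure_AI_Foundry_Project_Auth_Token>",  # placeholder
    "Content-Type": "application/json",
    "Repeatability-Request-ID": str(uuid.uuid4()),      # unique per logical request
    "Repeatability-First-Sent": formatdate(usegmt=True),  # HTTP-date timestamp
}
```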

Insights - list

GET {endpoint}/insights?api-version=2025-11-15-preview
List all insights in reverse chronological order (newest first).

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.
type | query | No | string | Filter by the type of analysis.
evalId | query | No | string | Filter by the evaluation ID.
runId | query | No | string | Filter by the evaluation run ID.
agentName | query | No | string | Filter by the agent name.
includeCoordinates | query | No | boolean | Whether to include coordinates for visualization in the response. Defaults to false.
x-ms-client-request-id | header | No | string | An opaque, globally-unique, client-generated string identifier for the request.

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-Type | Type
application/json | PagedInsight
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | Azure.Core.Foundations.ErrorResponse

Insights - get

GET {endpoint}/insights/{id}?api-version=2025-11-15-preview
Get a specific insight by Id.

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.
id | path | Yes | string | The unique identifier for the insights report.
includeCoordinates | query | No | boolean | Whether to include coordinates for visualization in the response. Defaults to false.
x-ms-client-request-id | header | No | string | An opaque, globally-unique, client-generated string identifier for the request.

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-Type | Type
application/json | Insight
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | Azure.Core.Foundations.ErrorResponse

Create memory store

POST {endpoint}/memory_stores?api-version=2025-11-15-preview
Create a memory store.

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
Name | Type | Description | Required
definition | object | Base definition for memory store configurations. | Yes
└─ kind | MemoryStoreKind | The kind of the memory store. | No
description | string | A human-readable description of the memory store. | No
metadata | object | Arbitrary key-value metadata to associate with the memory store. | No
name | string | The name of the memory store. | Yes

Responses

Status Code: 200 Description: The request has succeeded.
Content-Type | Type
application/json | MemoryStoreObject
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | ApiErrorResponse

List memory stores

GET {endpoint}/memory_stores?api-version=2025-11-15-preview
List all memory stores.

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.
limit | query | No | integer | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.
order | query | No | string | Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order. Possible values: asc, desc
after | query | No | string | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.
before | query | No | string | A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-Type | Type | Description
application/json | object | The response data for a requested list of items.
Status Code: default Description: An unexpected error response.
Content-Type | Type
application/json | ApiErrorResponse
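The after cursor drives forward pagination: pass the ID of the last object on one page to fetch the next. A small URL-building sketch (the endpoint and obj_foo object ID are placeholders):

```python
from urllib.parse import urlencode

def memory_store_page_url(endpoint, limit=20, after=None):
    """Build the URL for one page of the memory-store list.

    Pass the ID of the last object from the previous page as 'after'
    to fetch the next page (cursor pagination).
    """
    params = {"api-version": "2025-11-15-preview", "limit": limit}
    if after is not None:
        params["after"] = after
    return f"{endpoint}/memory_stores?{urlencode(params)}"

# First page, then the page after a hypothetical last object ID 'obj_foo'.
endpoint = "https://my-account.services.ai.azure.com/api/projects/_project"  # placeholder
first = memory_store_page_url(endpoint)
next_page = memory_store_page_url(endpoint, after="obj_foo")
```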

Update memory store

POST {endpoint}/memory_stores/{name}?api-version=2025-11-15-preview
Update a memory store.

URI Parameters

Name | In | Required | Type | Description
endpoint | path | Yes | string (url) | Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-version | query | Yes | string | The API version to use for this operation.
name | path | Yes | string | The name of the memory store to update.

Request Header

Name | Required | Type | Description
Authorization | True | string | Example: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
descriptionstringA human-readable description of the memory store.No
metadataobjectArbitrary key-value metadata to associate with the memory store.No

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonMemoryStoreObject
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Get memory store

GET {endpoint}/memory_stores/{name}?api-version=2025-11-15-preview
Retrieve a memory store.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
namepathYesstringThe name of the memory store to retrieve.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonMemoryStoreObject
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Delete memory store

DELETE {endpoint}/memory_stores/{name}?api-version=2025-11-15-preview
Delete a memory store.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
namepathYesstringThe name of the memory store to delete.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonDeleteMemoryStoreResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Get update result

GET {endpoint}/memory_stores/{name}/updates/{update_id}?api-version=2025-11-15-preview
Get memory store update result.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
namepathYesstringThe name of the memory store.
update_idpathYesstringThe ID of the memory update operation.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonMemoryStoreUpdateResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse
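Because update_memories returns 202 and processing completes asynchronously, callers typically poll this endpoint until the update reaches a terminal state. A hedged sketch (the fetch_result callable, the "status" field, and its values are assumptions for illustration, not the documented MemoryStoreUpdateResponse schema):

```python
import time

def wait_for_update(fetch_result, poll_seconds=2.0, timeout=600.0):
    """Poll a 'get update result' callable until a terminal state.

    `fetch_result()` stands in for a GET on
    /memory_stores/{name}/updates/{update_id}; the "status" field and the
    "completed"/"failed" values here are illustrative assumptions.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch_result()
        if result.get("status") in ("completed", "failed"):
            return result
        time.sleep(poll_seconds)
    raise TimeoutError("memory store update did not finish in time")

# Simulated responses: one in-flight poll, then completion.
states = iter([{"status": "in_progress"}, {"status": "completed"}])
print(wait_for_update(lambda: next(states), poll_seconds=0.0))  # → {'status': 'completed'}
```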

Delete scope memories

POST {endpoint}/memory_stores/{name}:delete_scope?api-version=2025-11-15-preview
Delete all memories associated with a specific scope from a memory store.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
namepathYesstringThe name of the memory store.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
scopestringThe namespace that logically groups and isolates memories to delete, such as a user ID.Yes

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonMemoryStoreDeleteScopeResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse
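The call above can be sketched as follows; the endpoint and token values are placeholders, and the request is built but not sent:

```python
import json
import urllib.parse
import urllib.request

# Hypothetical endpoint and token -- substitute your own project values.
ENDPOINT = "https://my-account.services.ai.azure.com/api/projects/_project"
API_VERSION = "2025-11-15-preview"
TOKEN = "<Azure_AI_Foundry_Project_Auth_Token>"

def delete_scope_request(store_name: str, scope: str) -> urllib.request.Request:
    """Build (but do not send) the POST that deletes all memories in a scope."""
    url = (f"{ENDPOINT}/memory_stores/{urllib.parse.quote(store_name)}"
           f":delete_scope?api-version={API_VERSION}")
    return urllib.request.Request(
        url,
        data=json.dumps({"scope": scope}).encode("utf-8"),
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = delete_scope_request("user-memories", "user_123")
print(req.get_method(), req.full_url)
```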

Search memories

POST {endpoint}/memory_stores/{name}:search_memories?api-version=2025-11-15-preview
Search for relevant memories from a memory store based on conversation context.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
namepathYesstringThe name of the memory store to search.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
itemsarrayItems for which to search for relevant memories.No
optionsobjectMemory search options.No
└─ max_memoriesintegerMaximum number of memory items to return.No
previous_search_idstringThe unique ID of the previous search request, enabling incremental memory search from where the last operation left off.No
scopestringThe namespace that logically groups and isolates memories, such as a user ID.Yes

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonMemoryStoreSearchResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse
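A small helper can assemble the search body while omitting unset optional fields; only scope is required per the table above, and the conversation-item shape in the example is illustrative rather than a documented schema:

```python
import json

def search_memories_body(scope, items=None, max_memories=None,
                         previous_search_id=None):
    """Assemble the JSON body for :search_memories, omitting unset fields."""
    body = {"scope": scope}  # scope is the only required field
    if items is not None:
        body["items"] = items
    if max_memories is not None:
        body["options"] = {"max_memories": max_memories}
    if previous_search_id is not None:
        body["previous_search_id"] = previous_search_id
    return body

payload = search_memories_body(
    "user_123",
    items=[{"type": "message", "role": "user",
            "content": "What coffee do I like?"}],  # item shape is illustrative
    max_memories=5,
)
print(json.dumps(payload, indent=2))
```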

Update memories

POST {endpoint}/memory_stores/{name}:update_memories?api-version=2025-11-15-preview
Update memory store with conversation memories.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
namepathYesstringThe name of the memory store to update.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
itemsarrayConversation items from which to extract memories.No
previous_update_idstringThe unique ID of the previous update request, enabling incremental memory updates from where the last operation left off.No
scopestringThe namespace that logically groups and isolates memories, such as a user ID.Yes
update_delayintegerTimeout period, in seconds, before processing the memory update.
If a new update request is received during this period, it will cancel the current request and reset the timeout.
Set to 0 to immediately trigger the update without delay.
Defaults to 300 (5 minutes).
No300

Responses

Status Code: 202 Description: The request has been accepted for processing, but processing has not yet completed.
Content-TypeTypeDescription
application/jsonMemoryStoreUpdateResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse
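The update_delay field debounces rapid successive updates: each new request within the window cancels the pending one and restarts the timer, so processing happens update_delay seconds after the last request. A toy illustration of that timing:

```python
def effective_processing_time(request_times, update_delay=300):
    """Illustrate the update_delay debounce: each new update request within
    the delay window cancels the pending one and restarts the timer, so
    processing fires update_delay seconds after the *last* request.
    """
    return request_times[-1] + update_delay

# Three updates at t=0, 100, 250 with the default 300s delay:
# processing fires at 250 + 300 = 550, not 0 + 300 = 300.
print(effective_processing_time([0, 100, 250]))  # → 550
```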

Create conversation

POST {endpoint}/openai/conversations?api-version=2025-11-15-preview
Create a conversation.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
itemsarrayInitial items to include in the conversation context.
You may add up to 20 items at a time.
No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.ConversationResource
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse
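The metadata constraints above (at most 16 pairs, 64-character string keys, 512-character string values) can be checked client-side before sending; a minimal sketch:

```python
def validate_metadata(metadata: dict) -> dict:
    """Check the documented metadata constraints before sending:
    at most 16 key-value pairs, string keys up to 64 characters,
    string values up to 512 characters.
    """
    if len(metadata) > 16:
        raise ValueError("metadata supports at most 16 key-value pairs")
    for key, value in metadata.items():
        if not isinstance(key, str) or len(key) > 64:
            raise ValueError(f"metadata key too long or not a string: {key!r}")
        if not isinstance(value, str) or len(value) > 512:
            raise ValueError(f"metadata value invalid for key {key!r}")
    return metadata

body = {"metadata": validate_metadata({"team": "support", "env": "prod"})}
print(body)
```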

List conversations

GET {endpoint}/openai/conversations?api-version=2025-11-15-preview
Returns the list of all conversations.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
limitqueryNointegerA limit on the number of objects to be returned. Limit can range between 1 and 100, and the
default is 20.
orderqueryNostring
Possible values: asc, desc
Sort order by the created_at timestamp of the objects. asc for ascending order and desc
for descending order.
afterqueryNostringA cursor for use in pagination. after is an object ID that defines your place in the list.
For instance, if you make a list request and receive 100 objects, ending with obj_foo, your
subsequent call can include after=obj_foo in order to fetch the next page of the list.
beforequeryNostringA cursor for use in pagination. before is an object ID that defines your place in the list.
For instance, if you make a list request and receive 100 objects, ending with obj_foo, your
subsequent call can include before=obj_foo in order to fetch the previous page of the list.
agent_namequeryNostringFilter by agent name. If provided, only items associated with the specified agent will be returned.
agent_idqueryNostringFilter by agent ID in the format name:version. If provided, only items associated with the specified agent ID will be returned.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonobjectThe response data for a requested list of items.
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse
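Query parameters for this list call can be assembled as below; the endpoint value is a placeholder, and unset filters are simply omitted:

```python
import urllib.parse

# Hypothetical endpoint -- substitute your own project URL.
ENDPOINT = "https://my-account.services.ai.azure.com/api/projects/_project"

def list_conversations_url(limit=20, order="desc", after=None, agent_name=None):
    """Build the GET URL for listing conversations, dropping unset filters."""
    params = {"api-version": "2025-11-15-preview",
              "limit": limit, "order": order}
    if after is not None:
        params["after"] = after
    if agent_name is not None:
        params["agent_name"] = agent_name
    return f"{ENDPOINT}/openai/conversations?{urllib.parse.urlencode(params)}"

print(list_conversations_url(limit=50, agent_name="my-agent"))
```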

Update conversation

POST {endpoint}/openai/conversations/{conversation_id}?api-version=2025-11-15-preview
Update a conversation.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
conversation_idpathYesstringThe id of the conversation to update.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.ConversationResource
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Get conversation

GET {endpoint}/openai/conversations/{conversation_id}?api-version=2025-11-15-preview
Retrieves a conversation.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
conversation_idpathYesstringThe id of the conversation to retrieve.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.ConversationResource
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Delete conversation

DELETE {endpoint}/openai/conversations/{conversation_id}?api-version=2025-11-15-preview
Deletes a conversation.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
conversation_idpathYesstringThe id of the conversation to delete.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.DeletedConversationResource
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Create conversation items

POST {endpoint}/openai/conversations/{conversation_id}/items?api-version=2025-11-15-preview
Create items in a conversation with the given ID.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
conversation_idpathYesstringThe id of the conversation in which to create the items.
includequeryNoarrayAdditional fields to include in the response.
See the include parameter for listing Conversation items for more information.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
itemsarrayThe items to add to the conversation. You may add up to 20 items at a time.Yes

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.ConversationItemList
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse
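Since at most 20 items may be added per request, a longer transcript has to be split across multiple POSTs; a minimal batching sketch:

```python
def chunk_items(items, batch_size=20):
    """Split a long item list into batches of at most 20, the documented
    per-request limit for creating conversation items."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

batches = chunk_items([{"n": i} for i in range(45)])
print([len(b) for b in batches])  # → [20, 20, 5]
```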

List conversation items

GET {endpoint}/openai/conversations/{conversation_id}/items?api-version=2025-11-15-preview
List all items for a conversation with the given ID.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
conversation_idpathYesstringThe id of the conversation whose items will be listed.
limitqueryNointegerA limit on the number of objects to be returned. Limit can range between 1 and 100, and the
default is 20.
orderqueryNostring
Possible values: asc, desc
Sort order by the created_at timestamp of the objects. asc for ascending order and desc
for descending order.
afterqueryNostringA cursor for use in pagination. after is an object ID that defines your place in the list.
For instance, if you make a list request and receive 100 objects, ending with obj_foo, your
subsequent call can include after=obj_foo in order to fetch the next page of the list.
beforequeryNostringA cursor for use in pagination. before is an object ID that defines your place in the list.
For instance, if you make a list request and receive 100 objects, ending with obj_foo, your
subsequent call can include before=obj_foo in order to fetch the previous page of the list.
item_typequeryNoFilter by item type. If provided, only items of the specified type will be returned.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonobjectThe response data for a requested list of items.
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Get conversation item

GET {endpoint}/openai/conversations/{conversation_id}/items/{item_id}?api-version=2025-11-15-preview
Get a single item from a conversation with the given IDs.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
conversation_idpathYesstringThe ID of the conversation that contains the item.
item_idpathYesstringThe id of the conversation item to retrieve.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.ItemResource
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Delete conversation item

DELETE {endpoint}/openai/conversations/{conversation_id}/items/{item_id}?api-version=2025-11-15-preview
Delete an item from a conversation with the given IDs.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
conversation_idpathYesstringThe id of the conversation from which the item will be deleted.
item_idpathYesstringThe id of the conversation item to delete.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.ConversationResource
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

OpenAI evals - list evals

GET {endpoint}/openai/evals?api-version=2025-11-15-preview
List evaluations for a project.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
afterqueryNostringIdentifier for the last run from the previous pagination request.
limitqueryNoNumber of runs to retrieve.
orderqueryNostring
Possible values: asc, desc
Sort order for runs by timestamp. Use asc for ascending order or desc for descending order. Defaults to asc.
order_byqueryNostring
Possible values: created_at, updated_at
Evals can be ordered by creation time or last updated time. Use
created_at for creation time or updated_at for last updated time.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonobjectThe response data for a requested list of items.
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

OpenAI evals - create eval

POST {endpoint}/openai/evals?api-version=2025-11-15-preview
Create the structure of an evaluation that can be used to test a model’s performance. An evaluation is a set of testing criteria and the config for a data source, which dictates the schema of the data used in the evaluation. After creating an evaluation, you can run it on different models and model parameters. We support several types of graders and data sources.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
data_source_configobjectA CustomDataSourceConfig object that defines the schema for the data source used for the evaluation runs.
This schema is used to define the shape of the data that will be:
- Used to define your testing criteria, and
- Required when creating a run
Yes
└─ include_sample_schemabooleanWhether the eval should expect you to populate the sample namespace (i.e., by generating responses from your data source)No
└─ item_schemaobjectThe json schema for each row in the data source.No
└─ metadataobjectMetadata filters for the stored completions data source.No
└─ scenarioenumData schema scenario.
Possible values: red_team, responses, traces
No
└─ typeenumThe data source config object type.
Possible values: azure_ai_source
No
metadataOpenAI.MetadataSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
namestringThe name of the evaluation.No
propertiesobjectImmutable set of 16 key-value pairs that can be attached to an object for storing additional information.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
No
testing_criteriaarrayA list of graders for all eval runs in this group. Graders can reference variables in the data source using double curly braces notation, like {{item.variable_name}}. To reference the model’s output, use the sample namespace (i.e., {{sample.output_text}}).Yes

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonEval
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse
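A request body for this call might look like the sketch below; the "custom" data-source discriminator and the string_check grader shape are assumptions drawn from OpenAI evals conventions, not guaranteed by this reference:

```python
import json

# Illustrative create-eval body; field shapes marked below are assumptions.
body = {
    "name": "sentiment-check",
    "data_source_config": {
        "type": "custom",          # assumed discriminator for a custom schema
        "item_schema": {           # JSON schema for each data-source row
            "type": "object",
            "properties": {"ticket": {"type": "string"},
                           "expected": {"type": "string"}},
            "required": ["ticket", "expected"],
        },
        "include_sample_schema": True,  # lets graders reference {{sample.*}}
    },
    "testing_criteria": [{
        "type": "string_check",    # assumed grader type, not the full catalog
        "name": "exact-match",
        # Double-curly templates reference the data source and model output.
        "input": "{{sample.output_text}}",
        "reference": "{{item.expected}}",
        "operation": "eq",
    }],
}
print(json.dumps(body)[:60])
```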

OpenAI evals - delete eval

DELETE {endpoint}/openai/evals/{eval_id}?api-version=2025-11-15-preview
Delete an evaluation.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
eval_idpathYesstringThe ID of the evaluation to delete.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonDeleteEvalResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

OpenAI evals - get eval

GET {endpoint}/openai/evals/{eval_id}?api-version=2025-11-15-preview
Get an evaluation by ID.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
eval_idpathYesstringThe ID of the evaluation to retrieve.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonEval
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

OpenAI evals - update eval

POST {endpoint}/openai/evals/{eval_id}?api-version=2025-11-15-preview
Update certain properties of an evaluation.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
eval_idpathYesstringThe ID of the evaluation to update.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
metadataOpenAI.MetadataSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
namestringNo
propertiesobjectImmutable set of 16 key-value pairs that can be attached to an object for storing additional information.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
No
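
A minimal sketch of an update-eval request body, assuming the limits stated in the table above. The `validate_metadata` helper and the payload values are illustrative, not part of the API.

```python
import json

def validate_metadata(metadata: dict) -> dict:
    """Check the documented metadata limits: at most 16 pairs,
    string keys of at most 64 chars, string values of at most 512 chars."""
    if len(metadata) > 16:
        raise ValueError("metadata may contain at most 16 key-value pairs")
    for k, v in metadata.items():
        if not (isinstance(k, str) and len(k) <= 64):
            raise ValueError(f"metadata key too long or not a string: {k!r}")
        if not (isinstance(v, str) and len(v) <= 512):
            raise ValueError(f"metadata value too long or not a string for key {k!r}")
    return metadata

# Hypothetical body for POST {endpoint}/openai/evals/{eval_id}.
body = json.dumps({
    "name": "nightly-regression-eval",
    "metadata": validate_metadata({"team": "qa", "suite": "regression"}),
})
```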

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonEval
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

OpenAI evals - list runs

GET {endpoint}/openai/evals/{eval_id}/runs?api-version=2025-11-15-preview
Get a list of runs for an evaluation.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
eval_idpathYesstringThe ID of the evaluation to retrieve runs for.
afterqueryNostringIdentifier for the last run from the previous pagination request.
limitqueryNointegerNumber of runs to retrieve.
orderqueryNostring
Possible values: asc, desc
Sort order for runs by timestamp. Use asc for ascending order or desc for descending order. Defaults to asc.
statusqueryNostring
Possible values: queued, in_progress, completed, canceled, failed. Filter runs by status.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonobjectThe response data for a requested list of items.
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse
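
The `after` and `limit` parameters above implement cursor pagination, which can be sketched as below. The list-response envelope (`data` plus `has_more`) follows the usual OpenAI list convention and is an assumption here, since the table only documents the response type as an object; `fetch_page` is a stand-in for a real HTTP call.

```python
from urllib.parse import urlencode

def build_runs_url(endpoint, eval_id, after=None, limit=20, status=None):
    """Compose the GET .../evals/{eval_id}/runs URL with pagination params."""
    params = {"api-version": "2025-11-15-preview", "limit": limit}
    if after:
        params["after"] = after    # ID of the last run from the previous page
    if status:
        params["status"] = status  # queued | in_progress | completed | canceled | failed
    return f"{endpoint}/openai/evals/{eval_id}/runs?{urlencode(params)}"

def list_all_runs(fetch_page, endpoint, eval_id):
    """Follow the `after` cursor until the service reports no more pages."""
    runs, after = [], None
    while True:
        page = fetch_page(build_runs_url(endpoint, eval_id, after=after))
        runs.extend(page["data"])
        if not page.get("has_more"):
            return runs
        after = page["data"][-1]["id"]  # cursor for the next request
```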

OpenAI evals - create eval run

POST {endpoint}/openai/evals/{eval_id}/runs?api-version=2025-11-15-preview
Create evaluation run

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
eval_idpathYesstringThe ID of the evaluation to create a run for.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
data_sourceobjectA JsonlRunDataSource object that specifies a JSONL file matching the eval.Yes
└─ input_messagesOpenAI.CreateEvalResponsesRunDataSourceInputMessagesTemplate or OpenAI.CreateEvalResponsesRunDataSourceInputMessagesItemReferenceUsed when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (ie, item.input_trajectory), or a template with variable references to the item namespace.No
└─ item_generation_paramsRedTeamItemGenerationParamsThe parameters for item generation.No
└─ modelstringThe name of the model to use for generating completions (e.g. “o3-mini”).No
└─ sampling_paramsOpenAI.CreateEvalResponsesRunDataSourceSamplingParamsNo
└─ sourceOpenAI.EvalJsonlFileContentSource or OpenAI.EvalJsonlFileIdSource or OpenAI.EvalResponsesSourceDetermines what populates the item namespace in this run’s data source.No
└─ targetTargetThe target configuration for the evaluation.No
└─ typestringThe data source type discriminator.No
metadataOpenAI.MetadataSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
namestringThe name of the run.No
propertiesobjectImmutable set of 16 key-value pairs that can be attached to an object for storing additional information.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
No
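
A sketch of a create-run request body using a JSONL file source. The discriminator strings (`"jsonl"`, `"file_id"`) follow OpenAI evals conventions but are assumptions here, as are the name and file ID; check them against the schema for your deployment.

```python
import json

# Hypothetical payload for POST {endpoint}/openai/evals/{eval_id}/runs.
run_body = {
    "name": "baseline-run",
    "data_source": {
        "type": "jsonl",              # JsonlRunDataSource discriminator (assumed)
        "source": {
            "type": "file_id",        # OpenAI.EvalJsonlFileIdSource discriminator (assumed)
            "id": "file-abc123",      # an uploaded JSONL file matching the eval's schema
        },
    },
    "metadata": {"purpose": "smoke-test"},
}
payload = json.dumps(run_body)
```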

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonEvalRun
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

OpenAI evals - delete eval run

DELETE {endpoint}/openai/evals/{eval_id}/runs/{run_id}?api-version=2025-11-15-preview
Delete an evaluation run.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
eval_idpathYesstringThe ID of the evaluation to delete the run from.
run_idpathYesstringThe ID of the run to delete.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonDeleteEvalRunResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

OpenAI evals - get eval run

GET {endpoint}/openai/evals/{eval_id}/runs/{run_id}?api-version=2025-11-15-preview
Get an evaluation run by ID.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
eval_idpathYesstringThe ID of the evaluation to retrieve runs for.
run_idpathYesstringThe ID of the run to retrieve.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonEvalRun
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

OpenAI evals - cancel eval run

POST {endpoint}/openai/evals/{eval_id}/runs/{run_id}?api-version=2025-11-15-preview
Cancel an ongoing evaluation run.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
eval_idpathYesstringThe ID of the evaluation whose run you want to cancel.
run_idpathYesstringThe ID of the run to cancel.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonEvalRun
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

OpenAI evals - get eval run output items

GET {endpoint}/openai/evals/{eval_id}/runs/{run_id}/output_items?api-version=2025-11-15-preview
Get a list of output items for an evaluation run.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
eval_idpathYesstringThe ID of the evaluation to retrieve output items for.
run_idpathYesstringThe ID of the run to retrieve output items for.
afterqueryNostringIdentifier for the last output item from the previous pagination request.
limitqueryNointegerNumber of output items to retrieve.
orderqueryNostring
Possible values: asc, desc
Sort order for output items by timestamp. Use asc for ascending order or desc for descending order. Defaults to asc.
statusqueryNostring
Possible values: fail, pass
Filter output items by status. Use fail to filter by failed output
items or pass to filter by passed output items.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonobjectThe response data for a requested list of items.
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

OpenAI evals - get eval run output item

GET {endpoint}/openai/evals/{eval_id}/runs/{run_id}/output_items/{output_item_id}?api-version=2025-11-15-preview
Get an evaluation run output item by ID.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
eval_idpathYesstringThe ID of the evaluation the run belongs to.
run_idpathYesstringThe ID of the run the output item belongs to.
output_item_idpathYesstringThe ID of the output item to retrieve.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonEvalRunOutputItem
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Create fine tuning job

POST {endpoint}/openai/fine-tuning/jobs?api-version=2025-11-15-preview
Creates a fine-tuning job which begins the process of creating a new model from a given dataset. The response includes details of the enqueued job, including the job status and the name of the fine-tuned model once complete. Learn more about fine-tuning

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
hyperparametersobjectThe hyperparameters used for the fine-tuning job.
This value is now deprecated in favor of method, and should be passed in under the method parameter.
No
└─ batch_sizeenum
Possible values: auto
No
└─ learning_rate_multiplierenum
Possible values: auto
No
└─ n_epochsenum
Possible values: auto
No
integrationsarrayA list of integrations to enable for your fine-tuning job.No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
methodOpenAI.FineTuneMethodThe method used for fine-tuning.No
modelstring (see valid models below)The name of the model to fine-tune. You can select one of the
supported models.
Yes
seedintegerThe seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases.
If a seed is not specified, one will be generated for you.
No
suffixstringA string of up to 64 characters that will be added to your fine-tuned model name.

For example, a suffix of “custom-model-name” would produce a model name like ft:gpt-4o-mini:openai:custom-model-name:7p4lURel.
NoNone
training_filestringThe ID of an uploaded file that contains training data.

Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose fine-tune.

The contents of the file depend on whether the model uses the chat or completions format, or whether the fine-tuning method uses the preference format.

See the fine-tuning guide for more details.
Yes
validation_filestringThe ID of an uploaded file that contains validation data.

If you provide this file, the data is used to generate validation
metrics periodically during fine-tuning. These metrics can be viewed in
the fine-tuning results file.
The same data should not be present in both the training and validation files.

Your dataset must be formatted as a JSONL file. You must upload your file with the purpose fine-tune.

See the fine-tuning guide for more details.
No
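
A minimal sketch of the create-job body, enforcing the documented 64-character suffix limit. The model name and file IDs are hypothetical placeholders; only `model` and `training_file` are required.

```python
import json

def build_fine_tuning_body(model, training_file, suffix=None, seed=None,
                           validation_file=None):
    """Assemble a request body for POST {endpoint}/openai/fine-tuning/jobs."""
    if suffix is not None and len(suffix) > 64:
        raise ValueError("suffix must be at most 64 characters")
    body = {"model": model, "training_file": training_file}
    if suffix is not None:
        body["suffix"] = suffix
    if seed is not None:
        body["seed"] = seed  # fixes reproducibility; generated for you if omitted
    if validation_file is not None:
        body["validation_file"] = validation_file
    return json.dumps(body)

# Hypothetical model and file IDs for illustration.
payload = build_fine_tuning_body("gpt-4o-mini", "file-train-123",
                                 suffix="custom-model-name", seed=42)
```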

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.FineTuningJob
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

List paginated fine tuning jobs

GET {endpoint}/openai/fine-tuning/jobs?api-version=2025-11-15-preview
List your organization’s fine-tuning jobs.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
afterqueryNostringIdentifier for the last job from the previous pagination request.
limitqueryNointegerNumber of fine-tuning jobs to retrieve.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.ListPaginatedFineTuningJobsResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Retrieve fine tuning job

GET {endpoint}/openai/fine-tuning/jobs/{fine_tuning_job_id}?api-version=2025-11-15-preview
Get info about a fine-tuning job. Learn more about fine-tuning

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
fine_tuning_job_idpathYesstringThe ID of the fine-tuning job.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.FineTuningJob
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Cancel fine tuning job

POST {endpoint}/openai/fine-tuning/jobs/{fine_tuning_job_id}/cancel?api-version=2025-11-15-preview
Immediately cancel a fine-tune job.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
fine_tuning_job_idpathYesstringThe ID of the fine-tuning job to cancel.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.FineTuningJob
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

List fine tuning job checkpoints

GET {endpoint}/openai/fine-tuning/jobs/{fine_tuning_job_id}/checkpoints?api-version=2025-11-15-preview
List checkpoints for a fine-tuning job.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
fine_tuning_job_idpathYesstringThe ID of the fine-tuning job to get checkpoints for.
afterqueryNostringIdentifier for the last checkpoint ID from the previous pagination request.
limitqueryNointegerNumber of checkpoints to retrieve.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.ListFineTuningJobCheckpointsResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

List fine tuning job events

GET {endpoint}/openai/fine-tuning/jobs/{fine_tuning_job_id}/events?api-version=2025-11-15-preview
Get fine-grained status updates for a fine-tuning job.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
fine_tuning_job_idpathYesstringThe ID of the fine-tuning job to get events for.
afterqueryNostringIdentifier for the last event from the previous pagination request.
limitqueryNointegerNumber of events to retrieve.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.ListFineTuningJobEventsResponse
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Pause fine tuning job

POST {endpoint}/openai/fine-tuning/jobs/{fine_tuning_job_id}/pause?api-version=2025-11-15-preview
Pause a running fine-tune job.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
fine_tuning_job_idpathYesstringThe ID of the fine-tuning job to pause.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.FineTuningJob
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Resume fine tuning job

POST {endpoint}/openai/fine-tuning/jobs/{fine_tuning_job_id}/resume?api-version=2025-11-15-preview
Resume a paused fine-tune job.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
fine_tuning_job_idpathYesstringThe ID of the fine-tuning job to resume.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.FineTuningJob
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Create response - create response stream

POST {endpoint}/openai/responses?api-version=2025-11-15-preview
Creates a model response. When stream is set to true, the response is delivered to the client as a stream of server-sent events.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryNostringThe API version to use for this operation.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
agentAgentReferenceThe agent to use for generating the response.No
backgroundbooleanWhether to run the model response in the background.
Learn more about background responses.
NoFalse
conversationstring or objectNo
includearraySpecify additional output data to include in the model response. Currently
supported values are:
- code_interpreter_call.outputs: Includes the outputs of Python code execution
in code interpreter tool call items.
- computer_call_output.output.image_url: Include image urls from the computer call output.
- file_search_call.results: Include the search results of
the file search tool call.
- message.input_image.image_url: Include image urls from the input message.
- message.output_text.logprobs: Include logprobs with assistant messages.
- reasoning.encrypted_content: Includes an encrypted version of reasoning
tokens in reasoning item outputs. This enables reasoning items to be used in
multi-turn conversations when using the Responses API statelessly (like
when the store parameter is set to false, or when an organization is
enrolled in the zero data retention program).
No
inputstring or arrayText, image, or file inputs to the model, used to generate a response.

Learn more:
- Text inputs and outputs
- Image inputs
- File inputs
- Managing conversation state
- Function calling
No
instructionsstringA system (or developer) message inserted into the model’s context.

When using along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
No
max_output_tokensintegerAn upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.No
max_tool_callsintegerThe maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
modelstringThe model deployment to use for the creation of this response.No
parallel_tool_callsbooleanWhether to allow the model to run tool calls in parallel.NoTrue
previous_response_idstringThe unique ID of the previous response to the model. Use this to
create multi-turn conversations. Learn more about
managing conversation state.
No
promptOpenAI.PromptReference to a prompt template and its variables.
Learn more.
No
reasoningOpenAI.Reasoningo-series models only

Configuration options for reasoning models.
No
service_tierOpenAI.ServiceTierNote: service_tier is not applicable to Azure OpenAI.No
storebooleanWhether to store the generated model response for later retrieval via
API.
NoTrue
streambooleanIf set to true, the model response data will be streamed to the client
as it is generated using server-sent events.
NoFalse
structured_inputsobjectThe structured inputs to the response that can participate in prompt template substitution or tool argument bindings.No
temperaturenumberWhat sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
No1
textobjectConfiguration options for a text response from the model. Can be plain
text or structured JSON data. See Text inputs and outputs
and Structured Outputs.
No
└─ formatOpenAI.ResponseTextFormatConfigurationNo
tool_choiceOpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObjectHow the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
No
toolsarrayAn array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

- Built-in tools: Tools that are provided by OpenAI that extend the
model’s capabilities, like file search.
- Function calls (custom tools): Functions that are defined by you,
enabling the model to call your own code.
No
top_logprobsintegerAn integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.No
top_pnumberAn alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or temperature but not both.
No1
truncationenumThe truncation strategy to use for the model response.
- auto: If the context of this response and previous ones exceeds
the model’s context window size, the model will truncate the
response to fit the context window by dropping input items in the
middle of the conversation.
- disabled (default): If a model response will exceed the context window
size for a model, the request will fail with a 400 error.
Possible values: auto, disabled
No
userstringA unique identifier representing your end user, which can help monitor and detect abuse. Learn more about safety best practices.No

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.Response
text/event-streamOpenAI.ResponseStreamEvent
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse
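The request body above can be sketched as a minimal JSON payload. This is an illustration only: the deployment name, input text, and metadata values are placeholders, not values from this reference.

```python
import json

# Minimal "create response" body; only the fields shown above are used.
# The deployment name and metadata values are hypothetical placeholders.
body = {
    "model": "my-model-deployment",      # assumed deployment name
    "input": "Summarize the attached notes in two sentences.",
    "instructions": "You are a concise assistant.",
    "temperature": 0.2,                  # lower values are more deterministic
    "max_output_tokens": 256,
    "store": False,                      # opt out of server-side storage
    "metadata": {"team": "docs"},        # up to 16 key-value pairs
}

payload = json.dumps(body)
```

Because `store` is set to `false` here, a follow-up turn would need `reasoning.encrypted_content` in `include` to reuse reasoning items statelessly, as described above.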

List responses

GET {endpoint}/openai/responses?api-version=2025-11-15-preview
Returns the list of all responses.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
limitqueryNointegerA limit on the number of objects to be returned. Limit can range between 1 and 100, and the
default is 20.
orderqueryNostring
Possible values: asc, desc
Sort order by the created_at timestamp of the objects. asc for ascending order and desc
for descending order.
afterqueryNostringA cursor for use in pagination. after is an object ID that defines your place in the list.
For instance, if you make a list request and receive 100 objects, ending with obj_foo, your
subsequent call can include after=obj_foo in order to fetch the next page of the list.
beforequeryNostringA cursor for use in pagination. before is an object ID that defines your place in the list.
For instance, if you make a list request and receive 100 objects, ending with obj_foo, your
subsequent call can include before=obj_foo in order to fetch the previous page of the list.
agent_namequeryNostringFilter by agent name. If provided, only items associated with the specified agent will be returned.
agent_idqueryNostringFilter by agent ID in the format name:version. If provided, only items associated with the specified agent ID will be returned.
conversation_idqueryNostringFilter by conversation ID. If provided, only responses associated with the specified conversation will be returned.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonobjectThe response data for a requested list of items.
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse
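The cursor pagination described above (`limit`, `order`, `after`, `before`) can be sketched as a small URL builder. The endpoint and object IDs below are placeholders:

```python
from urllib.parse import urlencode

def list_responses_url(endpoint, api_version="2025-11-15-preview",
                       limit=20, order="desc", after=None, agent_name=None):
    """Build the List responses URL; cursor and filter params are optional."""
    params = {"api-version": api_version, "limit": limit, "order": order}
    if after:
        params["after"] = after          # resume the list after this object ID
    if agent_name:
        params["agent_name"] = agent_name
    return f"{endpoint}/openai/responses?{urlencode(params)}"

url = list_responses_url("https://example.invalid/api/projects/_project",
                         after="obj_foo")
```

A client pages forward by passing the ID of the last object in each page as the next call's `after` cursor.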

Get response - get response stream

GET {endpoint}/openai/responses/{response_id}?api-version=2025-11-15-preview
Retrieves a model response with the given ID, either as a standard JSON response or, when streaming is requested, as a stream of server-sent events.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryNostringThe API version to use for this operation.
response_idpathYesstring
include[]queryNoarray
streamqueryNoboolean
starting_afterqueryNointeger
acceptheaderNostring
Possible values: text/event-stream

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.Response
text/event-streamOpenAI.ResponseStreamEvent
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse
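A sketch of assembling the URL and headers for this operation, including the repeated `include[]` query parameter and the `accept` header for a streamed response. The response ID and bearer token are placeholders:

```python
from urllib.parse import urlencode

def get_response_request(endpoint, response_id, stream=False,
                         include=None, api_version="2025-11-15-preview"):
    """Build URL and headers for Get response; stream=True requests SSE."""
    params = [("api-version", api_version)]
    for item in include or []:
        params.append(("include[]", item))   # e.g. message.output_text.logprobs
    if stream:
        params.append(("stream", "true"))
    headers = {"Authorization": "Bearer {token}"}    # placeholder token
    if stream:
        headers["accept"] = "text/event-stream"      # request SSE framing
    url = f"{endpoint}/openai/responses/{response_id}?{urlencode(params)}"
    return url, headers

url, headers = get_response_request(
    "https://example.invalid/api/projects/_project", "resp_123",
    stream=True, include=["message.output_text.logprobs"])
```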

Delete response

DELETE {endpoint}/openai/responses/{response_id}?api-version=2025-11-15-preview
Deletes a model response.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
response_idpathYesstringThe ID of the response to delete.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonDeleteResponseResult
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Cancel response

POST {endpoint}/openai/responses/{response_id}/cancel?api-version=2025-11-15-preview
Cancels a model response.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
response_idpathYesstringThe ID of the response to cancel.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonOpenAI.Response
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

List input items

GET {endpoint}/openai/responses/{response_id}/input_items?api-version=2025-11-15-preview
Returns a list of input items for a given response.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
response_idpathYesstring
limitqueryNointegerA limit on the number of objects to be returned. Limit can range between 1 and 100, and the
default is 20.
orderqueryNostring
Possible values: asc, desc
Sort order by the created_at timestamp of the objects. asc for ascending order and desc
for descending order.
afterqueryNostringA cursor for use in pagination. after is an object ID that defines your place in the list.
For instance, if you make a list request and receive 100 objects, ending with obj_foo, your
subsequent call can include after=obj_foo in order to fetch the next page of the list.
beforequeryNostringA cursor for use in pagination. before is an object ID that defines your place in the list.
For instance, if you make a list request and receive 100 objects, ending with obj_foo, your
subsequent call can include before=obj_foo in order to fetch the previous page of the list.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonobjectThe response data for a requested list of items.
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonApiErrorResponse

Redteams - list

GET {endpoint}/redTeams/runs?api-version=2025-11-15-preview
Lists all red team runs.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
x-ms-client-request-idheaderNoAn opaque, globally-unique, client-generated string identifier for the request.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonPagedRedTeam
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Redteams - get

GET {endpoint}/redTeams/runs/{name}?api-version=2025-11-15-preview
Get a red team run by name.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
namepathYesstringIdentifier of the red team run.
x-ms-client-request-idheaderNoAn opaque, globally-unique, client-generated string identifier for the request.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonRedTeam
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Redteams - create

POST {endpoint}/redTeams/runs:run?api-version=2025-11-15-preview
Creates a red team run.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
applicationScenariostringApplication scenario for the red team operation, used to generate scenario-specific attacks.No
attackStrategiesarrayList of attack strategies or nested lists of attack strategies.No
displayNamestringName of the red-team run.No
idstringIdentifier of the red team run.Yes
numTurnsintegerNumber of simulation rounds.No
propertiesobjectRed team’s properties. Unlike tags, properties are add-only. Once added, a property cannot be removed.No
riskCategoriesarrayList of risk categories to generate attack objectives for.No
simulationOnlybooleanSimulation-only or simulation plus evaluation. Defaults to false; if true, the scan outputs conversations rather than evaluation results.NoFalse
statusstringStatus of the red team run. It is set by the service and is read-only.No
tagsobjectRed team’s tags. Unlike properties, tags are fully mutable.No
targetobjectAbstract class for target configuration.Yes
└─ typestringType of the model configuration.No

Responses

Status Code: 201 Description: The request has succeeded and a new resource has been created as a result.
Content-TypeTypeDescription
application/jsonRedTeam
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse
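The request body above can be sketched as JSON. This is a hedged illustration: the risk category names, attack strategy name, and target type below are assumptions for demonstration and are not confirmed by this reference.

```python
import json

# Sketch of a red team run body. Only `id` and `target` are required;
# the category, strategy, and target-type values are hypothetical.
body = {
    "id": "redteam-run-001",                  # required identifier
    "displayName": "Baseline safety scan",
    "numTurns": 3,                            # number of simulation rounds
    "simulationOnly": False,                  # also produce evaluation results
    "riskCategories": ["Violence"],           # assumed category name
    "attackStrategies": ["base64"],           # assumed strategy name
    "target": {"type": "azureOpenAIModel"},   # assumed target type
}
payload = json.dumps(body)
```

Note that `status` is omitted: it is set by the service and read-only.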

Schedules - list

GET {endpoint}/schedules?api-version=2025-11-15-preview
List all schedules.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
x-ms-client-request-idheaderNoAn opaque, globally-unique, client-generated string identifier for the request.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonPagedSchedule
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Schedules - delete

DELETE {endpoint}/schedules/{id}?api-version=2025-11-15-preview
Delete a schedule.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
idpathYesstringIdentifier of the schedule.
x-ms-client-request-idheaderNoAn opaque, globally-unique, client-generated string identifier for the request.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 204 Description: There is no content to send for this request, but the headers may be useful.
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Schedules - get

GET {endpoint}/schedules/{id}?api-version=2025-11-15-preview
Get a schedule by id.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
idpathYesstringIdentifier of the schedule.
x-ms-client-request-idheaderNoAn opaque, globally-unique, client-generated string identifier for the request.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonSchedule
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Schedules - create or update

PUT {endpoint}/schedules/{id}?api-version=2025-11-15-preview
Create or update a schedule by id.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
idpathYesstringIdentifier of the schedule.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Request Body

Content-Type: application/json
NameTypeDescriptionRequiredDefault
descriptionstringDescription of the schedule.No
displayNamestringName of the schedule.No
enabledbooleanEnabled status of the schedule.Yes
idstringIdentifier of the schedule.Yes
propertiesobjectSchedule’s properties. Unlike tags, properties are add-only. Once added, a property cannot be removed.No
provisioningStatusobjectSchedule provisioning status.No
systemDataobjectSystem metadata for the resource.Yes
tagsobjectSchedule’s tags. Unlike properties, tags are fully mutable.No
taskobjectSchedule task model.Yes
└─ configurationobjectConfiguration for the task.No
└─ typeScheduleTaskTypeType of the task.No
triggerobjectBase model for Trigger of the schedule.Yes
└─ typeTriggerTypeType of the trigger.No

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonSchedule
Status Code: 201 Description: The request has succeeded and a new resource has been created as a result.
Content-TypeTypeDescription
application/jsonSchedule
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse
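A sketch of the PUT body above. The concrete `trigger` and `task` shapes are assumptions: this reference only names their discriminator fields (`type` as TriggerType and ScheduleTaskType) without listing concrete values, so the ones below are hypothetical.

```python
import json

# Hypothetical schedule body; trigger/task type values are placeholders.
schedule_id = "nightly-eval"
body = {
    "id": schedule_id,                # should match the {id} path parameter
    "enabled": True,                  # required
    "displayName": "Nightly evaluation",
    "trigger": {"type": "Cron", "expression": "0 2 * * *"},   # assumed shape
    "task": {"type": "Evaluation", "configuration": {}},      # assumed shape
    "tags": {"env": "dev"},           # tags are fully mutable
}
payload = json.dumps(body)
```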

Schedules - list runs

GET {endpoint}/schedules/{id}/runs?api-version=2025-11-15-preview
List all schedule runs.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
idpathYesstringIdentifier of the schedule.
x-ms-client-request-idheaderNoAn opaque, globally-unique, client-generated string identifier for the request.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonPagedScheduleRun
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Schedules - get run

GET {endpoint}/schedules/{scheduleId}/runs/{runId}?api-version=2025-11-15-preview
Get a schedule run by id.

URI Parameters

NameInRequiredTypeDescription
endpointpathYesstring
url
Foundry Project endpoint in the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/{project-name}. If you only have one Project in your Foundry Hub, or to target the default Project in your Hub, use the form https://{ai-services-account-name}.services.ai.azure.com/api/projects/_project
api-versionqueryYesstringThe API version to use for this operation.
scheduleIdpathYesstringIdentifier of the schedule.
runIdpathYesstringIdentifier of the schedule run.

Request Header

NameRequiredTypeDescription
AuthorizationTruestringExample: Authorization: Bearer {Azure_AI_Foundry_Project_Auth_Token}

To generate an auth token using Azure CLI: az account get-access-token --resource https://ai.azure.com/

Type: oauth2
Authorization Url: https://login.microsoftonline.com/common/oauth2/v2.0/authorize
scope: https://ai.azure.com/.default

Responses

Status Code: 200 Description: The request has succeeded.
Content-TypeTypeDescription
application/jsonScheduleRun
Status Code: default Description: An unexpected error response.
Content-TypeTypeDescription
application/jsonAzure.Core.Foundations.ErrorResponse

Components

A2ATool

An agent implementing the A2A protocol.
NameTypeDescriptionRequiredDefault
agent_card_pathstringThe path to the agent card relative to the base_url.
If not provided, defaults to /.well-known/agent-card.json
No
base_urlstringBase URL of the agent.No
project_connection_idstringThe connection ID in the project for the A2A server.
The connection stores authentication and other connection details needed to connect to the A2A server.
No
typeenumThe type of the tool. Always a2a_preview.
Possible values: a2a_preview
Yes
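A minimal A2A tool entry as it might appear in a request's `tools` array; the base URL and connection ID below are placeholders, not real resources.

```python
# Hypothetical A2A tool definition; base_url and connection ID are placeholders.
a2a_tool = {
    "type": "a2a_preview",                            # the only allowed value
    "base_url": "https://agents.example.invalid/weather",
    "project_connection_id": "my-a2a-connection",
    # agent_card_path omitted: defaults to /.well-known/agent-card.json
}
```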

AISearchIndexResource

An AI Search Index resource.
NameTypeDescriptionRequiredDefault
filterstringFilter string for the search resource. Learn more here.No
index_asset_idstringIndex asset id for search resource.No
index_namestringThe name of an index in an IndexResource attached to this agent.No
project_connection_idstringAn index connection ID in an IndexResource attached to this agent.No
query_typeobjectAvailable query types for Azure AI Search tool.No
top_kintegerNumber of documents to retrieve from search and present to the model.No

AgentClusterInsightResult

Insights from the agent cluster analysis.
NameTypeDescriptionRequiredDefault
clusterInsightClusterInsightResultInsights from the cluster analysis.Yes
typeenumThe type of insights result.
Possible values: AgentClusterInsight
Yes

AgentClusterInsightsRequest

Insights on a set of agent evaluation results.
NameTypeDescriptionRequiredDefault
agentNamestringIdentifier for the agent.Yes
modelConfigurationobjectConfiguration of the model used in the insight generation.No
└─ modelDeploymentNamestringThe model deployment to be evaluated. Accepts either the deployment name alone or with the connection name as {connectionName}/{modelDeploymentName}.No
typeenumThe type of request.
Possible values: AgentClusterInsight
Yes

AgentContainerObject

The details of the container of a specific version of an agent.
NameTypeDescriptionRequiredDefault
created_atstringThe creation time of the container.Yes
error_messagestringThe error message if the container failed to operate, if any.No
max_replicasintegerThe maximum number of replicas for the container. Default is 1.No
min_replicasintegerThe minimum number of replicas for the container. Default is 1.No
objectenumThe object type, which is always ‘agent.container’.
Possible values: agent.container
Yes
statusobjectStatus of the container of a specific version of an agent.Yes
updated_atstringThe last update time of the container.Yes

AgentContainerOperationError

The error details of the container operation, if any.
NameTypeDescriptionRequiredDefault
codestringThe error code of the container operation, if any.Yes
messagestringThe error message of the container operation, if any.Yes
typestringThe error type of the container operation, if any.Yes

AgentContainerOperationObject

The container operation for a specific version of an agent.
NameTypeDescriptionRequiredDefault
agent_idstringThe ID of the agent.Yes
agent_version_idstringThe ID of the agent version.Yes
containerobjectThe details of the container of a specific version of an agent.No
└─ created_atstringThe creation time of the container.No
└─ error_messagestringThe error message if the container failed to operate, if any.No
└─ max_replicasintegerThe maximum number of replicas for the container. Default is 1.No
└─ min_replicasintegerThe minimum number of replicas for the container. Default is 1.No
└─ objectenumThe object type, which is always ‘agent.container’.
Possible values: agent.container
No
└─ statusAgentContainerStatusThe status of the container of a specific version of an agent.No
└─ updated_atstringThe last update time of the container.No
errorobjectThe error details of the container operation, if any.No
└─ codestringThe error code of the container operation, if any.No
└─ messagestringThe error message of the container operation, if any.No
└─ typestringThe error type of the container operation, if any.No
idstringThe ID of the container operation. This ID is a unique identifier across the system.Yes
statusobjectStatus of the container operation for a specific version of an agent.Yes

AgentContainerOperationStatus

Status of the container operation for a specific version of an agent.
PropertyValue
DescriptionStatus of the container operation for a specific version of an agent.
Typestring
ValuesNotStarted
InProgress
Succeeded
Failed
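An operation transitions from NotStarted through InProgress to either Succeeded or Failed, so clients typically poll until a terminal value. A minimal sketch, with a stubbed status source standing in for the real operation GET call:

```python
import time

TERMINAL = {"Succeeded", "Failed"}

def wait_for_operation(get_status, interval=0.0, max_polls=20):
    """Poll a container operation until it reaches a terminal status.

    `get_status` is any callable returning one of NotStarted, InProgress,
    Succeeded, or Failed (e.g. a wrapper around the operation GET request).
    """
    for _ in range(max_polls):
        status = get_status()
        if status in TERMINAL:
            return status
        time.sleep(interval)
    raise TimeoutError("operation did not reach a terminal status")

# Simulated status sequence standing in for real API responses:
states = iter(["NotStarted", "InProgress", "InProgress", "Succeeded"])
result = wait_for_operation(lambda: next(states))
```

A real caller would use a non-zero `interval` and back off between polls.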

AgentContainerStatus

Status of the container of a specific version of an agent.
PropertyValue
DescriptionStatus of the container of a specific version of an agent.
Typestring
ValuesStarting
Running
Stopping
Stopped
Failed
Deleting
Deleted
Updating

AgentDefinition

Discriminator for AgentDefinition

This component uses the property kind to discriminate between different types:
NameTypeDescriptionRequiredDefault
kindAgentKindYes
rai_configobjectConfiguration for Responsible AI (RAI) content filtering and safety features.No
└─ rai_policy_namestringThe name of the RAI policy to apply.No

AgentId

NameTypeDescriptionRequiredDefault
namestringThe name of the agent.Yes
typeenum
Possible values: agent_id
Yes
versionstringThe version identifier of the agent.Yes

AgentKind

PropertyValue
Typestring
Valuesprompt
hosted
container_app
workflow

AgentObject

NameTypeDescriptionRequiredDefault
idstringThe unique identifier of the agent.Yes
namestringThe name of the agent.Yes
objectenumThe object type, which is always ‘agent’.
Possible values: agent
Yes
versionsobjectThe latest version of the agent.Yes
└─ latestAgentVersionObjectNo

AgentProtocol

PropertyValue
Typestring
Valuesactivity_protocol
responses

AgentReference

NameTypeDescriptionRequiredDefault
namestringThe name of the agent.Yes
typeenum
Possible values: agent_reference
Yes
versionstringThe version identifier of the agent.No

AgentTaxonomyInput

Input configuration for the evaluation taxonomy when the input type is agent.
NameTypeDescriptionRequiredDefault
riskCategoriesarrayList of risk categories to evaluate against.Yes
targetobjectRepresents a target specifying an Azure AI agent.Yes
└─ namestringThe unique identifier of the Azure AI agent.No
└─ tool_descriptionsarrayDescriptions of the tools available to the agent.No
└─ typeenumThe type of target, always azure_ai_agent.
Possible values: azure_ai_agent
No
└─ versionstringThe version of the Azure AI agent.No
typeenumInput type of the evaluation taxonomy.
Possible values: agent
Yes

AgentTaxonomyInputUpdate

Input configuration for the evaluation taxonomy when the input type is agent.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| riskCategories | array | List of risk categories to evaluate against. | No | |
| target | object | Represents a target specifying an Azure AI agent. | No | |
| └─ name | string | The unique identifier of the Azure AI agent. | No | |
| └─ tool_descriptions | array | The descriptions of tools available to the agent. | No | |
| └─ type | enum | The type of target, always `azure_ai_agent`. Possible values: `azure_ai_agent` | No | |
| └─ version | string | The version of the Azure AI agent. | No | |
| type | enum | Input type of the evaluation taxonomy. Possible values: `agent` | No | |

AgentVersionObject

| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| created_at | integer | The Unix timestamp (seconds) when the agent was created. | Yes | |
| definition | AgentDefinition | | Yes | |
| description | string | A human-readable description of the agent. | No | |
| id | string | The unique identifier of the agent version. | Yes | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | Yes | |
| name | string | The name of the agent. Name can be used to retrieve/update/delete the agent. | Yes | |
| object | enum | The object type, which is always `agent.version`. Possible values: `agent.version` | Yes | |
| version | string | The version identifier of the agent. Agents are immutable; every update creates a new version while keeping the name the same. | Yes | |
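
The `metadata` constraints (at most 16 pairs, keys up to 64 characters, values up to 512) apply to every object that carries metadata. A minimal client-side check, as a sketch:

```python
# Sketch: validating the documented metadata limits before sending a request
# (at most 16 pairs, keys <= 64 chars, values <= 512 chars).

def validate_metadata(metadata: dict) -> None:
    if len(metadata) > 16:
        raise ValueError("metadata allows at most 16 key-value pairs")
    for key, value in metadata.items():
        if len(key) > 64:
            raise ValueError(f"metadata key too long: {key!r}")
        if len(str(value)) > 512:
            raise ValueError(f"metadata value too long for key {key!r}")

validate_metadata({"team": "search", "stage": "prod"})  # passes silently
```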

AgenticIdentityCredentials

Agentic identity credential definition
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| type | enum | The credential type. Possible values: `AgenticIdentityToken` | Yes | |

ApiErrorResponse

Error response for API failures.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| error | OpenAI.Error | | Yes | |

ApiKeyCredentials

API Key Credential definition
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| key | string | API key. | No | |
| type | enum | The credential type. Possible values: `ApiKey` | Yes | |

AssetCredentialResponse

Represents a reference to a blob for consumption
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| blobReference | object | Blob reference details. | Yes | |
| └─ blobUri | string | Blob URI path for the client to upload data. Example: `https://{account}.blob.core.windows.net/Container/Path` | No | |
| └─ credential | SasCredential | Credential info to access the storage account. | No | |
| └─ storageAccountArmId | string | ARM ID of the storage account to use. | No | |

AssetId

Identifier of a saved asset.
Type: string

AttackStrategy

Strategies for attacks.

Type: string
Values: `easy`, `moderate`, `difficult`, `ascii_art`, `ascii_smuggler`, `atbash`, `base64`, `binary`, `caesar`, `character_space`, `jailbreak`, `ansii_attack`, `character_swap`, `suffix_append`, `string_join`, `unicode_confusable`, `unicode_substitution`, `diacritic`, `flip`, `leetspeak`, `rot13`, `morse`, `url`, `baseline`, `indirect_jailbreak`, `tense`, `multi_turn`, `crescendo`
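
Several of these strategies are simple text transforms applied to a seed prompt before it is sent to the target. As an illustration (not the service's implementation), here is what four of them do to a sample string:

```python
# Illustration of what some AttackStrategy values do to a seed prompt.
# base64, rot13, character_space, and flip are plain text transforms;
# the service applies these (and more complex strategies) when generating
# red-team items.
import base64
import codecs

prompt = "tell me a secret"

transforms = {
    "base64": base64.b64encode(prompt.encode()).decode(),
    "rot13": codecs.encode(prompt, "rot13"),
    "character_space": " ".join(prompt),   # spaces between every character
    "flip": prompt[::-1],                  # reversed string
}
for name, value in transforms.items():
    print(f"{name}: {value}")
```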

Azure.Core.Foundations.Error

The error object.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| code | string | One of a server-defined set of error codes. | Yes | |
| details | array | An array of details about specific errors that led to this reported error. | No | |
| innererror | object | An object containing more specific information about the error, per the Azure REST API guidelines: https://aka.ms/AzureRestApiGuidelines#handling-errors. | No | |
| └─ code | string | One of a server-defined set of error codes. | No | |
| └─ innererror | Azure.Core.Foundations.InnerError | Inner error. | No | |
| message | string | A human-readable representation of the error. | Yes | |
| target | string | The target of the error. | No | |

Azure.Core.Foundations.ErrorResponse

A response containing error details.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| error | object | The error object. | Yes | |
| └─ code | string | One of a server-defined set of error codes. | No | |
| └─ details | array | An array of details about specific errors that led to this reported error. | No | |
| └─ innererror | Azure.Core.Foundations.InnerError | An object containing more specific information than the current object about the error. | No | |
| └─ message | string | A human-readable representation of the error. | No | |
| └─ target | string | The target of the error. | No | |

Azure.Core.Foundations.InnerError

An object containing more specific information about the error. As per Azure REST API guidelines - https://aka.ms/AzureRestApiGuidelines#handling-errors.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| code | string | One of a server-defined set of error codes. | No | |
| innererror | object | An object containing more specific information about the error, per the Azure REST API guidelines: https://aka.ms/AzureRestApiGuidelines#handling-errors. | No | |
| └─ code | string | One of a server-defined set of error codes. | No | |
| └─ innererror | Azure.Core.Foundations.InnerError | Inner error. | No | |
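
Since `innererror` objects nest recursively, clients usually walk the chain to find the most specific error code. A sketch against the error shape above (the sample codes are made up for illustration):

```python
# Sketch: drilling into the nested innererror chain of an
# Azure.Core.Foundations.ErrorResponse to find the most specific code.

def most_specific_code(error: dict):
    code = error.get("code")
    inner = error.get("innererror")
    while inner:                       # follow the chain to the deepest code
        code = inner.get("code", code)
        inner = inner.get("innererror")
    return code

# Hypothetical error response, shaped per the tables above.
resp = {
    "error": {
        "code": "BadArgument",
        "message": "The agent definition is invalid.",
        "innererror": {
            "code": "InvalidName",
            "innererror": {"code": "NameTooLong"},
        },
    }
}
print(most_specific_code(resp["error"]))  # -> NameTooLong
```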

Azure.Core.Foundations.OperationState

Enum describing allowed operation states.

Type: string
Values: `NotStarted`, `Running`, `Succeeded`, `Failed`, `Canceled`

Azure.Core.uuid

Universally Unique Identifier.
Type: string
Format: uuid

AzureAIAgentTarget

Represents a target specifying an Azure AI agent.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| name | string | The unique identifier of the Azure AI agent. | Yes | |
| tool_descriptions | array | The descriptions of tools available to the agent. | No | |
| type | enum | The type of target, always `azure_ai_agent`. Possible values: `azure_ai_agent` | Yes | |
| version | string | The version of the Azure AI agent. | No | |

AzureAIAgentTargetUpdate

Represents a target specifying an Azure AI agent.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| name | string | The unique identifier of the Azure AI agent. | No | |
| tool_descriptions | array | The descriptions of tools available to the agent. | No | |
| type | enum | The type of target, always `azure_ai_agent`. Possible values: `azure_ai_agent` | No | |
| version | string | The version of the Azure AI agent. | No | |

AzureAIAssistantTarget

Represents a target specifying an Azure AI Assistant (Agent V1) endpoint, including its id.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| id | string | The unique identifier of the Azure AI Assistant. | No | |
| tool_descriptions | array | The descriptions of tools available to the assistant. | Yes | |
| type | enum | The type of target, always `azure_ai_assistant`. Possible values: `azure_ai_assistant` | Yes | |

AzureAIAssistantTargetUpdate

Represents a target specifying an Azure AI Assistant (Agent V1) endpoint, including its id.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| id | string | The unique identifier of the Azure AI Assistant. | No | |
| tool_descriptions | array | The descriptions of tools available to the assistant. | No | |
| type | enum | The type of target, always `azure_ai_assistant`. Possible values: `azure_ai_assistant` | No | |

AzureAIEvaluator

Azure AI Evaluator definition for foundry evaluators.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| data_mapping | object | The data mapping for the evaluator. | No | |
| evaluator_name | string | The name of the evaluator. | Yes | |
| evaluator_version | string | The version of the evaluator. | No | |
| initialization_parameters | object | The initialization parameters for the evaluator. | No | |
| name | string | The name of the grader. | Yes | |
| type | enum | The object type, which is always `azure_ai_evaluator`. Possible values: `azure_ai_evaluator` | Yes | |

AzureAIModelTarget

Represents a target specifying an Azure AI model for operations requiring model selection.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| model | string | The unique identifier of the Azure AI model. | No | |
| sampling_params | object | Represents a set of parameters used to control the sampling behavior of a language model during text generation. | No | |
| └─ max_completion_tokens | integer | The maximum number of tokens allowed in the completion. | No | |
| └─ seed | integer | The random seed for reproducibility. | No | |
| └─ temperature | number | The temperature parameter for sampling. | No | |
| └─ top_p | number | The top-p parameter for nucleus sampling. | No | |
| type | enum | The type of target, always `azure_ai_model`. Possible values: `azure_ai_model` | Yes | |
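
The schema does not state ranges for `sampling_params`, so a client may want to sanity-check values locally before building the target. A sketch, assuming the common model-API conventions of temperature in [0, 2] and top_p in [0, 1] (the service may enforce different limits):

```python
# Sketch: local range checks for sampling_params before building an
# azure_ai_model target. The bounds are assumptions based on common
# model-API conventions, not values stated by this schema.

def check_sampling_params(params: dict) -> dict:
    t = params.get("temperature")
    if t is not None and not (0.0 <= t <= 2.0):
        raise ValueError("temperature must be between 0 and 2")
    p = params.get("top_p")
    if p is not None and not (0.0 <= p <= 1.0):
        raise ValueError("top_p must be between 0 and 1")
    mct = params.get("max_completion_tokens")
    if mct is not None and mct <= 0:
        raise ValueError("max_completion_tokens must be positive")
    return params

target = {
    "type": "azure_ai_model",
    "model": "my-model",  # hypothetical model identifier
    "sampling_params": check_sampling_params(
        {"temperature": 0.7, "top_p": 0.95, "seed": 42}
    ),
}
```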

AzureAIModelTargetUpdate

Represents a target specifying an Azure AI model for operations requiring model selection.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| model | string | The unique identifier of the Azure AI model. | No | |
| sampling_params | object | Represents a set of parameters used to control the sampling behavior of a language model during text generation. | No | |
| └─ max_completion_tokens | integer | The maximum number of tokens allowed in the completion. | No | |
| └─ seed | integer | The random seed for reproducibility. | No | |
| └─ temperature | number | The temperature parameter for sampling. | No | |
| └─ top_p | number | The top-p parameter for nucleus sampling. | No | |
| type | enum | The type of target, always `azure_ai_model`. Possible values: `azure_ai_model` | No | |

AzureAIRedTeam

| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| item_generation_params | object | Represents the parameters for red team item generation. | Yes | |
| └─ attack_strategies | array | The collection of attack strategies to be used. | No | |
| └─ num_turns | integer | The number of turns allowed in the game. | No | |
| └─ type | enum | The type of item generation parameters, always `red_team`. Possible values: `red_team` | No | |
| target | object | Base class for targets with discriminator support. | Yes | |
| └─ type | string | The type of target. | No | |
| type | enum | The type of data source. Always `azure_ai_red_team`. Possible values: `azure_ai_red_team` | Yes | |

AzureAIResponses

Represents a data source for evaluation runs that are specific to Continuous Evaluation scenarios.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| event_configuration_id | string | The event configuration name associated with this evaluation run. | Yes | |
| item_generation_params | object | Represents the parameters for continuous evaluation item generation. | Yes | |
| └─ data_mapping | object | Mapping from source fields to the response_id field, required for retrieving chat history. | No | |
| └─ max_num_turns | integer | The maximum number of turns of chat history to evaluate. | No | |
| └─ source | OpenAI.EvalJsonlFileContentSource or OpenAI.EvalJsonlFileIdSource | The source from which JSONL content is read. | No | |
| └─ type | enum | The type of item generation parameters, always `response_retrieval`. Possible values: `response_retrieval` | No | |
| max_runs_hourly | integer | Maximum number of evaluation runs allowed per hour. | Yes | |
| type | enum | The type of data source, always `azure_ai_responses`. Possible values: `azure_ai_responses` | Yes | |

AzureAISearchAgentTool

The input definition information for an Azure AI search tool as used to configure an agent.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| azure_ai_search | object | A set of index resources used by the `azure_ai_search` tool. | Yes | |
| └─ indexes | array | The indices attached to this agent. There can be a maximum of 1 index resource attached to the agent. | No | |
| type | enum | The object type, which is always `azure_ai_search`. Possible values: `azure_ai_search` | Yes | |

AzureAISearchIndex

Azure AI Search Index Definition
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| type | enum | Type of index. Possible values: `AzureSearch` | Yes | |

AzureAISearchIndexUpdate

Azure AI Search Index Definition
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| type | enum | Type of index. Possible values: `AzureSearch` | Yes | |

AzureAISearchQueryType

Available query types for Azure AI Search tool.
Type: string
Values: `simple`, `semantic`, `vector`, `vector_simple_hybrid`, `vector_semantic_hybrid`

AzureAISearchToolResource

A set of index resources used by the azure_ai_search tool.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| indexes | array | The indices attached to this agent. There can be a maximum of 1 index resource attached to the agent. | Yes | |

AzureAISource

| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| scenario | enum | Data schema scenario. Possible values: `red_team`, `responses`, `traces` | Yes | |
| type | enum | The object type, which is always `azure_ai_source`. Possible values: `azure_ai_source` | Yes | |

AzureFunctionAgentTool

The input definition information for an Azure Function Tool, as used to configure an Agent.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| azure_function | object | The definition of the Azure function. | Yes | |
| └─ function | object | The definition of the Azure function and its parameters. | No | |
| └─ description | string | A description of what the function does, used by the model to choose when and how to call the function. | No | |
| └─ name | string | The name of the function to be called. | No | |
| └─ parameters | | The parameters the function accepts, described as a JSON Schema object. | No | |
| └─ input_binding | AzureFunctionBinding | Input storage queue. The queue storage trigger runs a function as messages are added to it. | No | |
| └─ output_binding | AzureFunctionBinding | Output storage queue. The function writes output to this queue when the input items are processed. | No | |
| type | enum | The object type, which is always `azure_function`. Possible values: `azure_function` | Yes | |

AzureFunctionBinding

The structure for keeping storage queue name and URI.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| storage_queue | object | The structure for keeping storage queue name and URI. | Yes | |
| └─ queue_name | string | The name of an Azure function storage queue. | No | |
| └─ queue_service_endpoint | string | URI to the Azure Storage Queue service allowing you to manipulate a queue. | No | |
| type | enum | The type of binding, which is always `storage_queue`. Possible values: `storage_queue` | Yes | |

AzureFunctionDefinition

The definition of Azure function.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| function | object | The definition of the Azure function and its parameters. | Yes | |
| └─ description | string | A description of what the function does, used by the model to choose when and how to call the function. | No | |
| └─ name | string | The name of the function to be called. | No | |
| └─ parameters | | The parameters the function accepts, described as a JSON Schema object. | No | |
| input_binding | object | The structure for keeping storage queue name and URI. | Yes | |
| └─ storage_queue | AzureFunctionStorageQueue | Storage queue. | No | |
| └─ type | enum | The type of binding, which is always `storage_queue`. Possible values: `storage_queue` | No | |
| output_binding | object | The structure for keeping storage queue name and URI. | Yes | |
| └─ storage_queue | AzureFunctionStorageQueue | Storage queue. | No | |
| └─ type | enum | The type of binding, which is always `storage_queue`. Possible values: `storage_queue` | No | |

AzureFunctionStorageQueue

The structure for keeping storage queue name and URI.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| queue_name | string | The name of an Azure function storage queue. | Yes | |
| queue_service_endpoint | string | URI to the Azure Storage Queue service allowing you to manipulate a queue. | Yes | |
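
Putting the three schemas above together, an `azure_function` tool payload nests a function definition plus input and output storage-queue bindings. A sketch with placeholder queue names, endpoint, and function schema:

```python
# Sketch: assembling an AzureFunctionAgentTool payload from the schemas
# above. The queue names, endpoint, and function definition are placeholders.
import json

def storage_queue_binding(queue_name: str, endpoint: str) -> dict:
    """Build an AzureFunctionBinding for a storage queue."""
    return {
        "type": "storage_queue",
        "storage_queue": {
            "queue_name": queue_name,
            "queue_service_endpoint": endpoint,
        },
    }

endpoint = "https://example.queue.core.windows.net"  # placeholder
tool = {
    "type": "azure_function",
    "azure_function": {
        "function": {
            "name": "lookup_order",
            "description": "Looks up an order by id.",
            "parameters": {
                "type": "object",
                "properties": {"order_id": {"type": "string"}},
                "required": ["order_id"],
            },
        },
        "input_binding": storage_queue_binding("inputs", endpoint),
        "output_binding": storage_queue_binding("outputs", endpoint),
    },
}
print(json.dumps(tool, indent=2))
```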

AzureOpenAIModelConfiguration

Azure OpenAI model configuration. The API version used to query the model is selected by the service.

| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| modelDeploymentName | string | Deployment name of the AOAI model, e.g. `gpt-4o` when the deployment is in AI Services, or `connection_name/deployment_name` for a connection-based deployment (e.g. `my-aoai-connection/gpt-4o`). | Yes | |
| type | enum | Possible values: `AzureOpenAIModel` | Yes | |

BaseCredentials

A base class for connection credentials

Discriminator for BaseCredentials

This component uses the property type to discriminate between different types:
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| type | object | The credential type used by the connection. | Yes | |

BingCustomSearchAgentTool

The input definition information for a Bing custom search tool as used to configure an agent.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| bing_custom_search_preview | object | The Bing custom search tool parameters. | Yes | |
| └─ search_configurations | array | The project connections attached to this tool. There can be a maximum of 1 connection resource attached to the tool. | No | |
| type | enum | The object type, which is always `bing_custom_search_preview`. Possible values: `bing_custom_search_preview` | Yes | |

BingCustomSearchConfiguration

A bing custom search configuration.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| count | integer | The number of search results to return in the Bing API response. | No | |
| freshness | string | Filter search results by a specific time range. See accepted values here. | No | |
| instance_name | string | Name of the custom configuration instance given to the config. | Yes | |
| market | string | The market where the results come from. | No | |
| project_connection_id | string | Project connection ID for grounding with Bing search. | Yes | |
| set_lang | string | The language to use for user interface strings when calling the Bing API. | No | |

BingCustomSearchToolParameters

The bing custom search tool parameters.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| search_configurations | array | The project connections attached to this tool. There can be a maximum of 1 connection resource attached to the tool. | Yes | |

BingGroundingAgentTool

The input definition information for a bing grounding search tool as used to configure an agent.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| bing_grounding | object | The Bing grounding search tool parameters. | Yes | |
| └─ search_configurations | array | The search configurations attached to this tool. There can be a maximum of 1 search configuration resource attached to the tool. | No | |
| type | enum | The object type, which is always `bing_grounding`. Possible values: `bing_grounding` | Yes | |

BingGroundingSearchConfiguration

Search configuration for Bing Grounding
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| count | integer | The number of search results to return in the Bing API response. | No | |
| freshness | string | Filter search results by a specific time range. See accepted values here. | No | |
| market | string | The market where the results come from. | No | |
| project_connection_id | string | Project connection ID for grounding with Bing search. | Yes | |
| set_lang | string | The language to use for user interface strings when calling the Bing API. | No | |

BingGroundingSearchToolParameters

The bing grounding search tool parameters.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| search_configurations | array | The search configurations attached to this tool. There can be a maximum of 1 search configuration resource attached to the tool. | Yes | |

BlobReference

Blob reference details.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| blobUri | string | Blob URI path for the client to upload data. Example: `https://{account}.blob.core.windows.net/Container/Path` | Yes | |
| credential | object | SAS credential definition. | Yes | |
| └─ sasUri | string | SAS URI. | No | |
| └─ type | enum | Type of credential. Possible values: `SAS` | No | |
| storageAccountArmId | string | ARM ID of the storage account to use. | Yes | |
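
A client typically combines `blobUri` with the SAS credential to form the URL it actually uploads to. A sketch, assuming the SAS may come back either as a bare query string or as a full signed URI (adjust to whichever shape your responses use):

```python
# Sketch: combining a BlobReference's blobUri with its SAS credential.
# Assumes `sasUri` holds either a bare SAS query string or a full signed
# URI; this is an illustration, not the service's documented contract.

def upload_url(blob_reference: dict) -> str:
    blob_uri = blob_reference["blobUri"]
    sas = blob_reference["credential"]["sasUri"]
    if sas.startswith("http"):
        return sas                      # already a full, signed URI
    return f"{blob_uri}?{sas.lstrip('?')}"

# Hypothetical response values for illustration.
ref = {
    "blobUri": "https://acct.blob.core.windows.net/container/path",
    "credential": {"type": "SAS", "sasUri": "?sv=2024-01-01&sig=abc"},
    "storageAccountArmId": "/subscriptions/.../storageAccounts/acct",
}
print(upload_url(ref))
```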

BrowserAutomationAgentTool

The input definition information for a Browser Automation Tool, as used to configure an Agent.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| browser_automation_preview | object | Definition of input parameters for the Browser Automation Tool. | Yes | |
| └─ connection | BrowserAutomationToolConnectionParameters | The project connection parameters associated with the Browser Automation Tool. | No | |
| type | enum | The object type, which is always `browser_automation_preview`. Possible values: `browser_automation_preview` | Yes | |

BrowserAutomationToolConnectionParameters

Definition of input parameters for the connection used by the Browser Automation Tool.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| project_connection_id | string | The ID of the project connection to your Azure Playwright resource. | Yes | |

BrowserAutomationToolParameters

Definition of input parameters for the Browser Automation Tool.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| connection | object | Definition of input parameters for the connection used by the Browser Automation Tool. | Yes | |
| └─ project_connection_id | string | The ID of the project connection to your Azure Playwright resource. | No | |

CaptureStructuredOutputsTool

A tool for capturing structured outputs
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| outputs | object | A structured output that can be produced by the agent. | Yes | |
| └─ description | string | A description of the output to emit. Used by the model to determine when to emit the output. | No | |
| └─ name | string | The name of the structured output. | No | |
| └─ schema | | The JSON schema for the structured output. | No | |
| └─ strict | boolean | Whether to enforce strict validation. | No | true |
| type | enum | The type of the tool. Always `capture_structured_outputs`. Possible values: `capture_structured_outputs` | Yes | |

ChartCoordinate

Coordinates for the analysis chart.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| size | integer | Size of the chart element. | Yes | |
| x | integer | X-axis coordinate. | Yes | |
| y | integer | Y-axis coordinate. | Yes | |

ChatSummaryMemoryItem

A memory item containing a summary extracted from conversations.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| kind | enum | The kind of the memory item. Possible values: `chat_summary` | Yes | |

ClusterInsightResult

Insights from the cluster analysis.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| clusters | array | List of clusters identified in the insights. | Yes | |
| coordinates | object | Optional mapping of IDs to 2D coordinates used by the UX for visualization. The map keys are string identifiers (for example, a cluster id or a sample id) and the values are the coordinates and visual size for rendering on a 2D chart. This property is omitted unless the client requests coordinates (for example, by passing `includeCoordinates=true` as a query parameter). Example: `{"cluster-1": {"x": 12, "y": 34, "size": 8}, "sample-123": {"x": 18, "y": 22, "size": 4}}`. Coordinates are intended only for client-side visualization and do not modify the canonical insights results. | No | |
| summary | object | Summary of the error cluster analysis. | Yes | |
| └─ method | string | Method used for clustering. | No | |
| └─ sampleCount | integer | Total number of samples analyzed. | No | |
| └─ uniqueClusterCount | integer | Total number of unique clusters. | No | |
| └─ uniqueSubclusterCount | integer | Total number of unique subcluster labels. | No | |
| └─ usage | ClusterTokenUsage | Token usage while performing clustering analysis. | No | |

ClusterTokenUsage

Token usage for cluster analysis
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| inputTokenUsage | integer | Input token usage. | Yes | |
| outputTokenUsage | integer | Output token usage. | Yes | |
| totalTokenUsage | integer | Total token usage. | Yes | |

CodeBasedEvaluatorDefinition

Code-based evaluator definition using Python code.

| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| code_text | string | Inline code text for the evaluator. | Yes | |
| type | enum | Possible values: `code` | Yes | |

Connection

Response from the list and get connections operations
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| credentials | object | A base class for connection credentials. | Yes | |
| └─ type | CredentialType | The type of credential used by the connection. | No | |
| id | string | A unique identifier for the connection, generated by the service. | Yes | |
| isDefault | boolean | Whether the connection is tagged as the default connection of its type. | Yes | |
| metadata | object | Metadata of the connection. | Yes | |
| name | string | The friendly name of the connection, provided by the user. | Yes | |
| target | string | The connection URL to be used for this service. | Yes | |
| type | object | The type (or category) of the connection. | Yes | |

ConnectionType

The Type (or category) of the connection
Type: string
Values: `AzureOpenAI`, `AzureBlob`, `AzureStorageAccount`, `CognitiveSearch`, `CosmosDB`, `ApiKey`, `AppConfig`, `AppInsights`, `CustomKeys`, `RemoteTool`

ContainerAppAgentDefinition

The container app agent definition.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| container_app_resource_id | string | The resource ID of the Azure Container App that hosts this agent. Not mutable across versions. | Yes | |
| container_protocol_versions | array | The protocols that the agent supports for ingress communication of the containers. | Yes | |
| ingress_subdomain_suffix | string | The suffix to apply to the app subdomain when sending ingress to the agent. This can be a label (e.g. `---current`), a specific revision (e.g. `--0000001`), or empty to use the default endpoint for the container app. | Yes | |
| kind | enum | Possible values: `container_app` | Yes | |

ContinuousEvalItemGenerationParams

Represents the parameters for continuous evaluation item generation.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| data_mapping | object | Mapping from source fields to the response_id field, required for retrieving chat history. | Yes | |
| max_num_turns | integer | The maximum number of turns of chat history to evaluate. | Yes | |
| source | object | | Yes | |
| └─ content | array | The content of the JSONL file. | No | |
| └─ id | string | The identifier of the file. | No | |
| └─ type | enum | The type of JSONL source. Always `file_id`. Possible values: `file_id` | No | |
| type | enum | The type of item generation parameters, always `response_retrieval`. Possible values: `response_retrieval` | Yes | |

ContinuousEvaluationRuleAction

Evaluation rule action for continuous evaluation.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| evalId | string | Eval ID to add continuous evaluation runs to. | Yes | |
| maxHourlyRuns | integer | Maximum number of evaluation runs allowed per hour. | No | |
| type | enum | Possible values: `continuousEvaluation` | Yes | |

CosmosDBIndex

CosmosDB Vector Store Index Definition
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| type | enum | Type of index. Possible values: `CosmosDBNoSqlVectorStore` | Yes | |

CosmosDBIndexUpdate

CosmosDB Vector Store Index Definition
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| type | enum | Type of index. Possible values: `CosmosDBNoSqlVectorStore` | Yes | |

CreateAgentFromManifestRequest

| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| description | string | A human-readable description of the agent. | No | |
| manifest_id | string | The manifest ID to import the agent version from. | Yes | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| name | string | The unique name that identifies the agent. Name can be used to retrieve/update/delete the agent. Must start and end with alphanumeric characters, can contain hyphens in the middle, and must not exceed 63 characters. | Yes | |
| parameter_values | object | The inputs to the manifest that will result in a fully materialized agent. | Yes | |

CreateAgentRequest

| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| definition | object | | Yes | |
| └─ kind | AgentKind | | No | |
| └─ rai_config | RaiConfig | Configuration for Responsible AI (RAI) content filtering and safety features. | No | |
| description | string | A human-readable description of the agent. | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| name | string | The unique name that identifies the agent. Name can be used to retrieve/update/delete the agent. Must start and end with alphanumeric characters, can contain hyphens in the middle, and must not exceed 63 characters. | Yes | |
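
The naming rules for `name` (alphanumeric at the start and end, hyphens allowed only in the middle, at most 63 characters) can be checked client-side before calling create agent. A minimal sketch of that validation:

```python
# Sketch: validating the agent `name` rules stated above: must start and end
# with alphanumeric characters, may contain hyphens in the middle, and must
# not exceed 63 characters.
import re

NAME_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$")

def is_valid_agent_name(name: str) -> bool:
    return len(name) <= 63 and NAME_RE.fullmatch(name) is not None

assert is_valid_agent_name("support-triage-agent")
assert not is_valid_agent_name("-starts-with-hyphen")
assert not is_valid_agent_name("ends-with-hyphen-")
assert not is_valid_agent_name("a" * 64)
```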

CreateAgentVersionFromManifestRequest

| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| description | string | A human-readable description of the agent. | No | |
| manifest_id | string | The manifest ID to import the agent version from. | Yes | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| parameter_values | object | The inputs to the manifest that will result in a fully materialized agent. | Yes | |

CreateAgentVersionRequest

| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| definition | object | | Yes | |
| └─ kind | AgentKind | | No | |
| └─ rai_config | RaiConfig | Configuration for Responsible AI (RAI) content filtering and safety features. | No | |
| description | string | A human-readable description of the agent. | No | |
| metadata | object | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |

CreateEvalRequest

| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| data_source_config | object | A CustomDataSourceConfig object that defines the schema for the data source used for the evaluation runs. This schema defines the shape of the data that will be used to define your testing criteria, and what data is required when creating a run. | Yes | |
| └─ include_sample_schema | boolean | Whether the eval should expect you to populate the sample namespace (i.e., by generating responses off of your data source). | No | |
| └─ item_schema | object | The JSON schema for each row in the data source. | No | |
| └─ metadata | object | Metadata filters for the stored completions data source. | No | |
| └─ scenario | enum | Data schema scenario. Possible values: `red_team`, `responses`, `traces` | No | |
| └─ type | enum | The object type, which is always `azure_ai_source`. Possible values: `azure_ai_source` | No | |
| metadata | OpenAI.Metadata | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| name | string | The name of the evaluation. | No | |
| properties | object | Set of 16 immutable key-value pairs that can be attached to an object for storing additional information. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| testing_criteria | array | A list of graders for all eval runs in this group. Graders can reference variables in the data source using double-curly-brace notation, like `{{item.variable_name}}`. To reference the model's output, use the sample namespace (i.e., `{{sample.output_text}}`). | Yes | |
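
The double-curly-brace references used by graders, e.g. `{{item.question}}` or `{{sample.output_text}}`, resolve against each row's `item` and `sample` namespaces. A sketch of that substitution, assuming the simple `namespace.key` form shown in the description:

```python
# Sketch: resolving {{item.*}} and {{sample.*}} references against a row,
# mirroring the grader template notation described above. Assumes the
# simple one-level `namespace.key` form.
import re

def render(template: str, item: dict, sample: dict) -> str:
    namespaces = {"item": item, "sample": sample}

    def repl(match: re.Match) -> str:
        ns, key = match.group(1), match.group(2)
        return str(namespaces[ns][key])

    return re.sub(r"\{\{\s*(item|sample)\.(\w+)\s*\}\}", repl, template)

out = render(
    "Q: {{item.question}}\nA: {{sample.output_text}}",
    item={"question": "What is 2+2?"},
    sample={"output_text": "4"},
)
print(out)
```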

CreateEvalRunRequest

| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| data_source | object | A JsonlRunDataSource object that specifies a JSONL file matching the eval. | Yes | |
| └─ input_messages | OpenAI.CreateEvalResponsesRunDataSourceInputMessagesTemplate or OpenAI.CreateEvalResponsesRunDataSourceInputMessagesItemReference | Used when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (i.e., `item.input_trajectory`) or a template with variable references to the item namespace. | No | |
| └─ item_generation_params | RedTeamItemGenerationParams | The parameters for item generation. | No | |
| └─ model | string | The name of the model to use for generating completions (e.g. `o3-mini`). | No | |
| └─ sampling_params | OpenAI.CreateEvalResponsesRunDataSourceSamplingParams | | No | |
| └─ source | OpenAI.EvalJsonlFileContentSource or OpenAI.EvalJsonlFileIdSource or OpenAI.EvalResponsesSource | Determines what populates the item namespace in this run's data source. | No | |
| └─ target | Target | The target configuration for the evaluation. | No | |
| └─ type | string | The data source type discriminator. | No | |
| metadata | OpenAI.Metadata | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |
| name | string | The name of the run. | No | |
| properties | object | Set of 16 immutable key-value pairs that can be attached to an object for storing additional information. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. | No | |

CreatedBy

| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| agent | object | | No | |
| └─ name | string | The name of the agent. | No | |
| └─ type | enum | Possible values: `agent_id` | No | |
| └─ version | string | The version identifier of the agent. | No | |
| response_id | string | The response on which the item was created. | No | |

CredentialType

The credential type used by the connection
Type: string
Values: `ApiKey`, `AAD`, `SAS`, `CustomKeys`, `None`, `AgenticIdentityToken`

CronTrigger

Cron based trigger.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| endTime | string | End time for the cron schedule in ISO 8601 format. | No | |
| expression | string | Cron expression that defines the schedule frequency. | Yes | |
| startTime | string | Start time for the cron schedule in ISO 8601 format. | No | |
| timeZone | string | Time zone for the cron schedule. | No | UTC |
| type | enum | Possible values: `Cron` | Yes | |

CustomCredential

Custom credential definition
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| type | enum | The credential type. Possible values: `CustomKeys` | Yes | |

DailyRecurrenceSchedule

Daily recurrence schedule.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| hours | array | Hours for the recurrence schedule. | Yes | |
| type | enum | Daily recurrence type. Possible values: `Daily` | Yes | |
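
A daily schedule fires at each listed hour every day. A sketch of how a client might compute the next fire time from a given instant, assuming the hours are interpreted in the schedule's time zone and runs fire at minute zero (the service's exact semantics may differ):

```python
# Sketch: computing the next fire time of a DailyRecurrenceSchedule
# ({"type": "Daily", "hours": [...]}) from a given instant. Assumes hours
# are in the schedule's time zone and runs fire at minute/second zero.
from datetime import datetime, timedelta

def next_run(schedule: dict, now: datetime) -> datetime:
    hours = sorted(schedule["hours"])
    for hour in hours:
        candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
        if candidate > now:
            return candidate
    # All of today's hours have passed; roll over to the first hour tomorrow.
    tomorrow = now + timedelta(days=1)
    return tomorrow.replace(hour=hours[0], minute=0, second=0, microsecond=0)

schedule = {"type": "Daily", "hours": [6, 12, 18]}
print(next_run(schedule, datetime(2025, 11, 15, 10, 30)))  # 12:00 same day
```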

DatasetType

Enum to determine the type of data.
Type: string
Values: `uri_file`, `uri_folder`

DatasetVersion

DatasetVersion Definition

Discriminator for DatasetVersion

This component uses the property type to discriminate between different types:
Type ValueSchema
uri_fileFileDatasetVersion
uri_folderFolderDatasetVersion
NameTypeDescriptionRequiredDefault
connectionNamestringThe Azure Storage Account connection name. Required if startPendingUploadVersion was not called before creating the Dataset.No
dataUristringURI of the data.Yes
idstringAsset ID, a unique identifier for the assetNo
isReferencebooleanIndicates if the dataset holds a reference to the storage, or the dataset manages storage itself. If true, the underlying data will not be deleted when the dataset version is deletedNo
namestringThe name of the resourceYes
typeobjectEnum to determine the type of data.Yes
versionstringThe version of the resourceYes
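As a concrete example, a `uri_file` DatasetVersion body built from the table above could be sketched as follows. The URI and connection name are placeholders, not real resources:

```python
# Sketch of a FileDatasetVersion (type discriminator "uri_file").
dataset_version = {
    "type": "uri_file",        # selects the FileDatasetVersion schema
    "name": "support-transcripts",
    "version": "1",
    "dataUri": "https://example.blob.core.windows.net/data/transcripts.jsonl",
    # Required if startPendingUploadVersion was not called first:
    "connectionName": "my-storage-connection",
    # True: the dataset references storage it does not manage, so the
    # underlying data survives deletion of this dataset version.
    "isReference": True,
}
```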

DatasetVersionUpdate

DatasetVersion Definition

Discriminator for DatasetVersionUpdate

This component uses the property type to discriminate between different types:
Type ValueSchema
uri_fileFileDatasetVersionUpdate
uri_folderFolderDatasetVersionUpdate
NameTypeDescriptionRequiredDefault
descriptionstringThe asset description text.No
tagsobjectTag dictionary. Tags can be added, removed, and updated.No
typeobjectEnum to determine the type of data.Yes

DayOfWeek

Days of the week for recurrence schedule.
PropertyValue
DescriptionDays of the week for recurrence schedule.
Typestring
ValuesSunday
Monday
Tuesday
Wednesday
Thursday
Friday
Saturday

DeleteAgentResponse

A deleted agent Object
NameTypeDescriptionRequiredDefault
deletedbooleanWhether the agent was successfully deleted.Yes
namestringThe name of the agent.Yes
objectenumThe object type. Always ‘agent.deleted’.
Possible values: agent.deleted
Yes

DeleteAgentVersionResponse

A deleted agent version Object
NameTypeDescriptionRequiredDefault
deletedbooleanWhether the agent version was successfully deleted.Yes
namestringThe name of the agent.Yes
objectenumThe object type. Always ‘agent.version.deleted’.
Possible values: agent.version.deleted
Yes
versionstringThe version identifier of the agent.Yes

DeleteEvalResponse

A deleted evaluation Object
NameTypeDescriptionRequiredDefault
deletedbooleanWhether the eval was successfully deleted.Yes
eval_idstringID of the eval.Yes
objectenumThe object type. Always ‘eval.deleted’.
Possible values: eval.deleted
Yes

DeleteEvalRunResponse

A deleted evaluation run Object.
NameTypeDescriptionRequiredDefault
deletedbooleanWhether the eval run was successfully deleted.No
objectenumThe object type. Always ‘eval.deleted’.
Possible values: eval.deleted
No
run_idstringID of the eval run.No

DeleteMemoryStoreResponse

NameTypeDescriptionRequiredDefault
deletedbooleanWhether the memory store was successfully deleted.Yes
namestringThe name of the memory store.Yes
objectenumThe object type. Always ‘memory_store.deleted’.
Possible values: memory_store.deleted
Yes

DeleteResponseResult

The result of a delete response operation.
NameTypeDescriptionRequiredDefault
deletedenumAlways returns true.
Possible values: True
Yes
idstringThe operation ID.Yes
objectenumAlways returns ‘response’.
Possible values: response
Yes

Deployment

Model Deployment Definition

Discriminator for Deployment

This component uses the property type to discriminate between different types:
Type ValueSchema
ModelDeploymentModelDeployment
NameTypeDescriptionRequiredDefault
namestringName of the deploymentYes
typeobjectYes

DeploymentType

PropertyValue
Typestring
ValuesModelDeployment

EntraIDCredentials

Entra ID credential definition
NameTypeDescriptionRequiredDefault
typeenumThe credential type
Possible values: AAD
Yes

Eval

An Eval object with a data source config and testing criteria. An Eval represents a task to be done for your LLM integration, such as:
  • Improve the quality of my chatbot
  • See how well my chatbot handles customer support
  • Check if o4-mini is better at my use case than gpt-4o
NameTypeDescriptionRequiredDefault
created_atobjectYes
created_bystringThe name of the person who created the evaluation.No
data_source_configobjectA CustomDataSourceConfig object that defines the schema for the data source used for the evaluation runs.
This schema is used to define the shape of the data that will be:
- Used to define your testing criteria and
- What data is required when creating a run
Yes
└─ include_sample_schemabooleanWhether the eval should expect you to populate the sample namespace (i.e., by generating responses from your data source).No
└─ item_schemaobjectThe json schema for each row in the data source.No
└─ metadataobjectMetadata filters for the stored completions data source.No
└─ scenarioenumData schema scenario.
Possible values: red_team, responses, traces
No
└─ typeenumThe data source config type.
Possible values: azure_ai_source
No
idstringUnique identifier for the evaluation.Yes
metadataOpenAI.MetadataSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
Yes
modified_atobjectNo
namestringThe name of the evaluation.Yes
objectenumThe object type.
Possible values: eval
Yes
propertiesobjectSet of 16 immutable key-value pairs that can be attached to an object for storing additional information.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
No
testing_criteriaarrayA list of testing criteria.Yes
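The `data_source_config` fields above can be combined into a create-eval body like the following hedged sketch. The schema fields and names are sample values; in practice `testing_criteria` would list one or more graders:

```python
# Illustrative Eval body using a custom per-row item schema.
eval_body = {
    "name": "chatbot-quality",
    "data_source_config": {
        "type": "azure_ai_source",
        # JSON schema (Draft 2020-12 style) for each row in the data source.
        "item_schema": {
            "type": "object",
            "properties": {
                "question": {"type": "string"},
                "expected_answer": {"type": "string"},
            },
            "required": ["question"],
        },
        # Rows are expected to also carry a generated sample namespace.
        "include_sample_schema": True,
    },
    "testing_criteria": [],  # graders would be listed here
}
```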

EvalCompareReport

Insights from the evaluation comparison.
NameTypeDescriptionRequiredDefault
comparisonsarrayComparison results for each treatment run against the baseline.Yes
methodstringThe statistical method used for comparison.Yes
typeenumThe type of insights result.
Possible values: EvaluationComparison
Yes

EvalResult

Result of the evaluation.
NameTypeDescriptionRequiredDefault
namestringName of the check.Yes
passedbooleanIndicates whether the check passed or failed.Yes
scorenumberScore of the check.Yes
typestringType of the check.Yes

EvalRun

A schema representing an evaluation run.
NameTypeDescriptionRequiredDefault
created_atobjectYes
created_bystringThe name of the person who created the run.No
data_sourceobjectA JsonlRunDataSource object that specifies a JSONL file matching the eval.Yes
└─ input_messagesOpenAI.CreateEvalResponsesRunDataSourceInputMessagesTemplate or OpenAI.CreateEvalResponsesRunDataSourceInputMessagesItemReferenceUsed when sampling from a model. Dictates the structure of the messages passed into the model. Can either be a reference to a prebuilt trajectory (i.e., item.input_trajectory), or a template with variable references to the item namespace.No
└─ item_generation_paramsRedTeamItemGenerationParamsThe parameters for item generation.No
└─ modelstringThe name of the model to use for generating completions (e.g. “o3-mini”).No
└─ sampling_paramsOpenAI.CreateEvalResponsesRunDataSourceSamplingParamsNo
└─ sourceOpenAI.EvalJsonlFileContentSource or OpenAI.EvalJsonlFileIdSource or OpenAI.EvalResponsesSourceDetermines what populates the item namespace in this run’s data source.No
└─ targetTargetThe target configuration for the evaluation.No
└─ typestringThe data source type discriminator.No
errorOpenAI.EvalApiErrorAn object representing an error response from the Eval API.Yes
eval_idstringThe identifier of the associated evaluation.Yes
idstringUnique identifier for the evaluation run.Yes
metadataOpenAI.MetadataSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
Yes
modelstringThe model that is evaluated, if applicable.Yes
modified_atobjectNo
namestringThe name of the evaluation run.Yes
objectenumThe type of the object. Always “eval.run”.
Possible values: eval.run
Yes
per_model_usagearrayUsage statistics for each model during the evaluation run.Yes
per_testing_criteria_resultsarrayResults per testing criteria applied during the evaluation run.Yes
propertiesobjectSet of 16 immutable key-value pairs that can be attached to an object for storing additional information.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
No
report_urlstringThe URL to the rendered evaluation run report on the UI dashboard.Yes
result_countsobjectYes
└─ erroredOpenAI.integerNo
└─ failedOpenAI.integerNo
└─ passedOpenAI.integerNo
└─ totalOpenAI.integerNo
statusstringThe status of the evaluation run.Yes
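The `result_counts` object above buckets run output items into passed, failed, and errored. A minimal sanity check (illustrative; the counts below are sample numbers) is that `total` equals the sum of the three buckets:

```python
# Example result_counts as returned on an EvalRun (sample values).
result_counts = {"passed": 42, "failed": 3, "errored": 1, "total": 46}

# Assumed invariant: every output item lands in exactly one bucket.
assert result_counts["total"] == (
    result_counts["passed"] + result_counts["failed"] + result_counts["errored"]
)
```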

EvalRunDataSource

Base class for run data sources with discriminator support.

Discriminator for EvalRunDataSource

This component uses the property type to discriminate between different types:
Type ValueSchema
azure_ai_tracesTracesEvalRunDataSource
azure_ai_responsesAzureAIResponses
azure_ai_target_completionsTargetCompletions
NameTypeDescriptionRequiredDefault
typestringThe data source type discriminator.Yes

EvalRunOutputItem

A schema representing an evaluation run output item.
NameTypeDescriptionRequiredDefault
created_atobjectYes
datasource_itemobjectDetails of the input data source item.Yes
datasource_item_idobjectYes
eval_idstringThe identifier of the evaluation group.Yes
idstringUnique identifier for the evaluation run output item.Yes
objectenumThe type of the object. Always “eval.run.output_item”.
Possible values: eval.run.output_item
Yes
resultsarrayA list of grader results for this output item.Yes
run_idstringThe identifier of the evaluation run associated with this output item.Yes
sampleobjectYes
└─ errorOpenAI.EvalApiErrorAn object representing an error response from the Eval API.No
└─ finish_reasonstringNo
└─ inputarrayNo
└─ max_completion_tokensOpenAI.integerNo
└─ modelstringNo
└─ outputarrayNo
└─ seedOpenAI.integerNo
└─ temperatureOpenAI.numericNo
└─ top_pOpenAI.numericNo
└─ usageOpenAI.EvalRunOutputItemSampleUsageNo
statusstringThe status of the evaluation run.Yes

EvalRunOutputItemResult

A single grader result for an evaluation run output item.
NameTypeDescriptionRequiredDefault
labelstringThe label associated with the test criteria metric (e.g., “pass”, “fail”, “good”, “bad”).No
metricstringThe name of the metric (e.g., “fluency”, “f1_score”).No
namestringThe name of the grader.Yes
passedbooleanWhether the grader considered the output a pass.Yes
propertiesobjectAdditional details about the test criteria metric.No
reasonstringThe reason for the test criteria metric.No
sampleobjectOptional sample or intermediate data produced by the grader.No
scoreobjectYes
thresholdnumberThe threshold used to determine pass/fail for this test criteria, if it is numerical.No
typestringThe grader type (for example, “string-check-grader”).No
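For a numeric grader, the relationship between `score`, `threshold`, and `passed` can be sketched as below. The metric name is a sample, and the `score >= threshold` semantics are an assumption for graders with a numerical threshold:

```python
# Illustrative derivation of an EvalRunOutputItemResult-shaped record.
def grade(score: float, threshold: float) -> dict:
    return {
        "metric": "f1_score",           # sample metric name
        "score": score,
        "threshold": threshold,
        "passed": score >= threshold,   # assumed pass semantics
    }

result = grade(0.82, 0.75)
# result["passed"] is True here
```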

EvalRunResultCompareItem

Metric comparison for a treatment against the baseline.
NameTypeDescriptionRequiredDefault
deltaEstimatenumberEstimated difference between treatment and baseline.Yes
pValuenumberP-value for the treatment effect.Yes
treatmentEffectobjectTreatment Effect Type.Yes
treatmentRunIdstringThe treatment run ID.Yes
treatmentRunSummaryobjectSummary statistics of a metric in an evaluation run.Yes
└─ averagenumberAverage value of the metric in the evaluation run.No
└─ runIdstringThe evaluation run ID.No
└─ sampleCountintegerNumber of samples in the evaluation run.No
└─ standardDeviationnumberStandard deviation of the metric in the evaluation run.No

EvalRunResultComparison

Comparison results for treatment runs against the baseline.
NameTypeDescriptionRequiredDefault
baselineRunSummaryobjectSummary statistics of a metric in an evaluation run.Yes
└─ averagenumberAverage value of the metric in the evaluation run.No
└─ runIdstringThe evaluation run ID.No
└─ sampleCountintegerNumber of samples in the evaluation run.No
└─ standardDeviationnumberStandard deviation of the metric in the evaluation run.No
compareItemsarrayList of comparison results for each treatment run.Yes
evaluatorstringName of the evaluator for this testing criteria.Yes
metricstringMetric being evaluated.Yes
testingCriteriastringName of the testing criteria.Yes

EvalRunResultSummary

Summary statistics of a metric in an evaluation run.
NameTypeDescriptionRequiredDefault
averagenumberAverage value of the metric in the evaluation run.Yes
runIdstringThe evaluation run ID.Yes
sampleCountintegerNumber of samples in the evaluation run.Yes
standardDeviationnumberStandard deviation of the metric in the evaluation run.Yes
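To make the EvalRunResultSummary fields concrete, here is how they could be derived from per-item scores. This is illustrative only (the service computes these server-side), the run ID is a placeholder, and population standard deviation is an assumption:

```python
import statistics

scores = [0.8, 0.9, 0.7, 1.0]  # sample per-item metric scores
summary = {
    "runId": "run_123",                              # placeholder ID
    "sampleCount": len(scores),
    "average": statistics.mean(scores),
    "standardDeviation": statistics.pstdev(scores),  # assuming population std dev
}
```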

EvaluationComparisonRequest

Evaluation Comparison Request
NameTypeDescriptionRequiredDefault
baselineRunIdstringThe baseline run ID for comparison.Yes
evalIdstringIdentifier for the evaluation.Yes
treatmentRunIdsarrayList of treatment run IDs for comparison.Yes
typeenumThe type of request.
Possible values: EvaluationComparison
Yes

EvaluationResultSample

A sample from the evaluation result.
NameTypeDescriptionRequiredDefault
evaluationResultobjectResult of the evaluation.Yes
└─ namestringName of the check.No
└─ passedbooleanIndicates whether the check passed or failed.No
└─ scorenumberScore of the check.No
└─ typestringType of the check.No
typeenumEvaluation Result Sample Type
Possible values: EvaluationResultSample
Yes

EvaluationRule

Evaluation rule model.
NameTypeDescriptionRequiredDefault
actionobjectEvaluation action model.Yes
└─ typeEvaluationRuleActionTypeType of the evaluation action.No
descriptionstringDescription for the evaluation rule.No
displayNamestringDisplay Name for the evaluation rule.No
enabledbooleanIndicates whether the evaluation rule is enabled. Default is true.Yes
eventTypeobjectType of the evaluation rule event.Yes
filterobjectEvaluation filter model.No
└─ agentNamestringFilter by agent name.No
idstringUnique identifier for the evaluation rule.Yes
systemDataobjectSystem metadata for the evaluation rule.Yes

EvaluationRuleAction

Evaluation action model.

Discriminator for EvaluationRuleAction

This component uses the property type to discriminate between different types:
Type ValueSchema
continuousEvaluationContinuousEvaluationRuleAction
humanEvaluationHumanEvaluationRuleAction
NameTypeDescriptionRequiredDefault
typeobjectType of the evaluation action.Yes

EvaluationRuleActionType

Type of the evaluation action.
PropertyValue
DescriptionType of the evaluation action.
Typestring
ValuescontinuousEvaluation
humanEvaluation

EvaluationRuleEventType

Type of the evaluation rule event.
PropertyValue
DescriptionType of the evaluation rule event.
Typestring
ValuesresponseCompleted
manual

EvaluationRuleFilter

Evaluation filter model.
NameTypeDescriptionRequiredDefault
agentNamestringFilter by agent name.Yes

EvaluationRunClusterInsightResult

Insights from the evaluation run cluster analysis.
NameTypeDescriptionRequiredDefault
clusterInsightClusterInsightResultInsights from the cluster analysis.Yes
typeenumThe type of insights result.
Possible values: EvaluationRunClusterInsight
Yes

EvaluationRunClusterInsightsRequest

Insights on set of Evaluation Results
NameTypeDescriptionRequiredDefault
evalIdstringEvaluation Id for the insights.Yes
modelConfigurationobjectConfiguration of the model used in the insight generation.No
└─ modelDeploymentNamestringThe model deployment to be evaluated. Accepts either the deployment name alone or with the connection name as {connectionName}/{modelDeploymentName}.No
runIdsarrayList of evaluation run IDs for the insights.Yes
typeenumThe type of insights request.
Possible values: EvaluationRunClusterInsight
Yes

EvaluationScheduleTask

Evaluation task for the schedule.
NameTypeDescriptionRequiredDefault
evalIdstringIdentifier of the evaluation group.Yes
evalRunobjectThe evaluation run payload.Yes
typeenum
Possible values: Evaluation
Yes

EvaluationTaxonomy

Evaluation Taxonomy Definition
NameTypeDescriptionRequiredDefault
idstringAsset ID, a unique identifier for the assetNo
namestringThe name of the resourceYes
propertiesobjectAdditional properties for the evaluation taxonomy.No
taxonomyCategoriesarrayList of taxonomy categories.No
taxonomyInputobjectInput configuration for the evaluation taxonomy.Yes
└─ typeEvaluationTaxonomyInputTypeInput type of the evaluation taxonomy.No
versionstringThe version of the resourceYes

EvaluationTaxonomyCreateOrUpdate

Evaluation Taxonomy Definition
NameTypeDescriptionRequiredDefault
descriptionstringThe asset description text.No
propertiesobjectAdditional properties for the evaluation taxonomy.No
tagsobjectTag dictionary. Tags can be added, removed, and updated.No
taxonomyCategoriesarrayList of taxonomy categories.No
taxonomyInputobjectInput configuration for the evaluation taxonomy.Yes
└─ typeEvaluationTaxonomyInputTypeInput type of the evaluation taxonomy.No

EvaluationTaxonomyInput

Input configuration for the evaluation taxonomy.

Discriminator for EvaluationTaxonomyInput

This component uses the property type to discriminate between different types:
Type ValueSchema
agentAgentTaxonomyInput
NameTypeDescriptionRequiredDefault
typeobjectType of the evaluation taxonomy input.Yes

EvaluationTaxonomyInputType

Type of the evaluation taxonomy input.
PropertyValue
DescriptionType of the evaluation taxonomy input.
Typestring
Valuesagent
policy

EvaluationTaxonomyInputUpdate

Input configuration for the evaluation taxonomy.

Discriminator for EvaluationTaxonomyInputUpdate

This component uses the property type to discriminate between different types:
Type ValueSchema
agentAgentTaxonomyInputUpdate
NameTypeDescriptionRequiredDefault
typeobjectType of the evaluation taxonomy input.Yes

EvaluationTaxonomyUpdate

Evaluation Taxonomy Definition
NameTypeDescriptionRequiredDefault
descriptionstringThe asset description text.No
propertiesobjectAdditional properties for the evaluation taxonomy.No
tagsobjectTag dictionary. Tags can be added, removed, and updated.No
taxonomyCategoriesarrayList of taxonomy categories.No
taxonomyInputobjectInput configuration for the evaluation taxonomy.No
└─ typeEvaluationTaxonomyInputTypeInput type of the evaluation taxonomy.No

EvaluatorCategory

The category of the evaluator
PropertyValue
DescriptionThe category of the evaluator
Typestring
Valuesquality
safety
agents

EvaluatorDefinition

Base evaluator configuration with discriminator

Discriminator for EvaluatorDefinition

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
data_schemaThe JSON schema (Draft 2020-12) for the evaluator’s input data. This includes parameters like type, properties, required.No
init_parametersThe JSON schema (Draft 2020-12) for the evaluator’s input parameters. This includes parameters like type, properties, required.No
metricsobjectList of output metrics produced by this evaluatorNo
typeobjectThe type of evaluator definitionYes

EvaluatorDefinitionType

The type of evaluator definition
PropertyValue
DescriptionThe type of evaluator definition
Typestring
Valuesprompt
code
prompt_and_code
service
openai_graders

EvaluatorMetric

Evaluator Metric
NameTypeDescriptionRequiredDefault
desirable_directionobjectThe direction of the metric indicating whether a higher value is better, a lower value is better, or neutralNo
is_primarybooleanIndicates if this metric is primary when there are multiple metrics.No
max_valuenumberMaximum value for the metric. If not specified, it is assumed to be unbounded.No
min_valuenumberMinimum value for the metricNo
typeobjectThe type of the evaluatorNo

EvaluatorMetricDirection

The direction of the metric indicating whether a higher value is better, a lower value is better, or neutral
PropertyValue
DescriptionThe direction of the metric indicating whether a higher value is better, a lower value is better, or neutral
Typestring
Valuesincrease
decrease
neutral

EvaluatorMetricType

The type of the evaluator
PropertyValue
DescriptionThe type of the evaluator
Typestring
Valuesordinal
continuous
boolean

EvaluatorType

The type of the evaluator
PropertyValue
DescriptionThe type of the evaluator
Typestring
Valuesbuiltin
custom

EvaluatorVersion

Evaluator Definition
NameTypeDescriptionRequiredDefault
categoriesarrayThe categories of the evaluatorYes
created_atintegerCreation date/time of the evaluatorYes
created_bystringCreator of the evaluatorYes
definitionobjectBase evaluator configuration with discriminatorYes
└─ data_schemaThe JSON schema (Draft 2020-12) for the evaluator’s input data. This includes parameters like type, properties, required.No
└─ init_parametersThe JSON schema (Draft 2020-12) for the evaluator’s input parameters. This includes parameters like type, properties, required.No
└─ metricsobjectList of output metrics produced by this evaluatorNo
└─ typeEvaluatorDefinitionTypeThe type of evaluator definitionNo
display_namestringDisplay Name for evaluator. It helps to find the evaluator easily in Foundry. It does not need to be unique.No
evaluator_typeobjectThe type of the evaluatorYes
idstringAsset ID, a unique identifier for the assetNo
metadataobjectMetadata about the evaluatorNo
modified_atintegerLast modified date/time of the evaluatorYes
namestringThe name of the resourceYes
versionstringThe version of the resourceYes

EvaluatorVersionCreate

Evaluator Definition
NameTypeDescriptionRequiredDefault
categoriesarrayThe categories of the evaluatorYes
definitionobjectBase evaluator configuration with discriminatorYes
└─ data_schemaThe JSON schema (Draft 2020-12) for the evaluator’s input data. This includes parameters like type, properties, required.No
└─ init_parametersThe JSON schema (Draft 2020-12) for the evaluator’s input parameters. This includes parameters like type, properties, required.No
└─ metricsobjectList of output metrics produced by this evaluatorNo
└─ typeEvaluatorDefinitionTypeThe type of evaluator definitionNo
descriptionstringThe asset description text.No
display_namestringDisplay Name for evaluator. It helps to find the evaluator easily in Foundry. It does not need to be unique.No
evaluator_typeobjectThe type of the evaluatorYes
metadataobjectMetadata about the evaluatorNo
tagsobjectTag dictionary. Tags can be added, removed, and updated.No

EvaluatorVersionUpdate

Evaluator Definition
NameTypeDescriptionRequiredDefault
categoriesarrayThe categories of the evaluatorNo
descriptionstringThe asset description text.No
display_namestringDisplay Name for evaluator. It helps to find the evaluator easily in Foundry. It does not need to be unique.No
metadataobjectMetadata about the evaluatorNo
tagsobjectTag dictionary. Tags can be added, removed, and updated.No

FabricDataAgentToolParameters

The fabric data agent tool parameters.
NameTypeDescriptionRequiredDefault
project_connectionsarrayThe project connections attached to this tool. There can be a maximum of 1 connection
resource attached to the tool.
No

FileDatasetVersion

FileDatasetVersion Definition
NameTypeDescriptionRequiredDefault
typeenumDataset type
Possible values: uri_file
Yes

FileDatasetVersionUpdate

FileDatasetVersion Definition
NameTypeDescriptionRequiredDefault
typeenumDataset type
Possible values: uri_file
Yes

FolderDatasetVersion

FolderDatasetVersion Definition
NameTypeDescriptionRequiredDefault
typeenumDataset type
Possible values: uri_folder
Yes

FolderDatasetVersionUpdate

FolderDatasetVersion Definition
NameTypeDescriptionRequiredDefault
typeenumDataset type
Possible values: uri_folder
Yes

HostedAgentDefinition

The hosted agent definition.

Discriminator for HostedAgentDefinition

This component uses the property kind to discriminate between different types:
Type ValueSchema
hostedImageBasedHostedAgentDefinition
NameTypeDescriptionRequiredDefault
container_protocol_versionsarrayThe protocols that the agent supports for ingress communication of the containers.Yes
cpustringThe CPU configuration for the hosted agent.Yes
environment_variablesobjectEnvironment variables to set in the hosted agent container.No
kindenum
Possible values: hosted
Yes
memorystringThe memory configuration for the hosted agent.Yes
toolsarrayAn array of tools the hosted agent’s model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.
No

HourlyRecurrenceSchedule

Hourly recurrence schedule.
NameTypeDescriptionRequiredDefault
typeenum
Possible values: Hourly
Yes

HumanEvaluationRuleAction

Evaluation rule action for human evaluation.
NameTypeDescriptionRequiredDefault
templateIdobjectIdentifier of a saved asset.Yes
typeenum
Possible values: humanEvaluation
Yes

ImageBasedHostedAgentDefinition

The image-based deployment definition for a hosted agent.
NameTypeDescriptionRequiredDefault
imagestringThe image for the hosted agent.Yes
kindenum
Possible values: hosted
Yes

Index

Index resource Definition

Discriminator for Index

This component uses the property type to discriminate between different types:
Type ValueSchema
AzureSearchAzureAISearchIndex
ManagedAzureSearchManagedAzureAISearchIndex
CosmosDBNoSqlVectorStoreCosmosDBIndex
NameTypeDescriptionRequiredDefault
idstringAsset ID, a unique identifier for the assetNo
namestringThe name of the resourceYes
typeobjectYes
versionstringThe version of the resourceYes

IndexType

PropertyValue
Typestring
ValuesAzureSearch
CosmosDBNoSqlVectorStore
ManagedAzureSearch

IndexUpdate

Index resource Definition

Discriminator for IndexUpdate

This component uses the property type to discriminate between different types:
Type ValueSchema
AzureSearchAzureAISearchIndexUpdate
ManagedAzureSearchManagedAzureAISearchIndexUpdate
CosmosDBNoSqlVectorStoreCosmosDBIndexUpdate
NameTypeDescriptionRequiredDefault
descriptionstringThe asset description text.No
tagsobjectTag dictionary. Tags can be added, removed, and updated.No
typeobjectYes

Insight

The response body for cluster insights.
NameTypeDescriptionRequiredDefault
displayNamestringUser-friendly display name for the insight.Yes
idstringThe unique identifier for the insights report.Yes
metadataobjectMetadata about the insights.Yes
└─ completedAtstringThe timestamp when the insights were completed.No
└─ createdAtstringThe timestamp when the insights were created.No
requestobjectThe request of the insights report.Yes
└─ typeInsightTypeThe type of request.No
resultobjectThe result of the insights.No
└─ typeInsightTypeThe type of insights result.No
stateobjectEnum describing allowed operation states.Yes

InsightCluster

A cluster of analysis samples.
NameTypeDescriptionRequiredDefault
descriptionstringDescription of the analysis cluster.Yes
idstringThe id of the analysis cluster.Yes
labelstringLabel for the clusterYes
samplesarrayList of samples that belong to this cluster. Empty if samples are part of subclusters.No
subClustersarrayList of subclusters within this cluster. Empty if no subclusters exist.No
suggestionstringSuggestion for the clusterYes
suggestionTitlestringThe title of the suggestion for the clusterYes
weightintegerThe weight of the analysis cluster. This indicates the number of samples in the cluster.Yes

InsightModelConfiguration

Configuration of the model used in the insight generation.
NameTypeDescriptionRequiredDefault
modelDeploymentNamestringThe model deployment to be evaluated. Accepts either the deployment name alone or with the connection name as {connectionName}/{modelDeploymentName}.Yes

InsightRequest

The request of the insights report.

Discriminator for InsightRequest

This component uses the property type to discriminate between different types:
Type ValueSchema
EvaluationRunClusterInsightEvaluationRunClusterInsightsRequest
AgentClusterInsightAgentClusterInsightsRequest
EvaluationComparisonEvaluationComparisonRequest
NameTypeDescriptionRequiredDefault
typeobjectThe type of the insights request.Yes

InsightResult

The result of the insights.

Discriminator for InsightResult

This component uses the property type to discriminate between different types:
Type ValueSchema
EvaluationComparisonEvalCompareReport
EvaluationRunClusterInsightEvaluationRunClusterInsightResult
AgentClusterInsightAgentClusterInsightResult
NameTypeDescriptionRequiredDefault
typeobjectThe type of the insights result.Yes

InsightSample

A sample from the analysis.

Discriminator for InsightSample

This component uses the property type to discriminate between different types:
Type ValueSchema
EvaluationResultSampleEvaluationResultSample
NameTypeDescriptionRequiredDefault
correlationInfoobjectInfo about the correlation for the analysis sample.Yes
featuresobjectFeatures to help with additional filtering of data in UX.Yes
idstringThe unique identifier for the analysis sample.Yes
typeobjectThe type of sample used in the analysis.Yes

InsightScheduleTask

Insight task for the schedule.
NameTypeDescriptionRequiredDefault
insightobjectThe response body for cluster insights.Yes
└─ displayNamestringUser-friendly display name for the insight.No
└─ idstringThe unique identifier for the insights report.No
└─ metadataInsightsMetadataMetadata about the insights report.No
└─ requestInsightRequestRequest for the insights analysis.No
└─ resultInsightResultThe result of the insights report.No
└─ stateAzure.Core.Foundations.OperationStateThe current state of the insights.No
typeenum
Possible values: Insight
Yes

InsightSummary

Summary of the error cluster analysis.
NameTypeDescriptionRequiredDefault
methodstringMethod used for clustering.Yes
sampleCountintegerTotal number of samples analyzed.Yes
uniqueClusterCountintegerTotal number of unique clusters.Yes
uniqueSubclusterCountintegerTotal number of unique subcluster labels.Yes
usageobjectToken usage for cluster analysisYes
└─ inputTokenUsageintegerinput token usageNo
└─ outputTokenUsageintegeroutput token usageNo
└─ totalTokenUsageintegertotal token usageNo

InsightType

The type of the insights.
PropertyValue
Typestring
ValuesEvaluationRunClusterInsight
AgentClusterInsight
EvaluationComparison

InsightsMetadata

Metadata about the insights.
NameTypeDescriptionRequiredDefault
completedAtstringThe timestamp when the insights were completed.No
createdAtstringThe timestamp when the insights were created.Yes

ItemGenerationParams

Represents the set of parameters used to control item generation operations.

Discriminator for ItemGenerationParams

This component uses the property type to discriminate between different types:
Type ValueSchema
NameTypeDescriptionRequiredDefault
typestringThe type of item generation parameters to use.Yes

ManagedAzureAISearchIndex

Managed Azure AI Search Index Definition
NameTypeDescriptionRequiredDefault
typeenumType of index
Possible values: ManagedAzureSearch
Yes

ManagedAzureAISearchIndexUpdate

Managed Azure AI Search Index Definition
NameTypeDescriptionRequiredDefault
typeenumType of index
Possible values: ManagedAzureSearch
Yes

MemoryItem

A single memory item stored in the memory store, containing content and metadata.

Discriminator for MemoryItem

This component uses the property kind to discriminate between different types:
Type ValueSchema
user_profileUserProfileMemoryItem
chat_summaryChatSummaryMemoryItem
NameTypeDescriptionRequiredDefault
contentstringThe content of the memory.Yes
kindobjectMemory item kind.Yes
memory_idstringThe unique ID of the memory item.Yes
scopestringThe namespace that logically groups and isolates memories, such as a user ID.Yes
updated_atintegerThe last update time of the memory item.Yes

MemoryItemKind

Memory item kind.
PropertyValue
DescriptionMemory item kind.
Typestring
Valuesuser_profile
chat_summary

MemoryOperation

Represents a single memory operation (create, update, or delete) performed on a memory item.
NameTypeDescriptionRequiredDefault
kindobjectMemory operation kind.Yes
memory_itemobjectA single memory item stored in the memory store, containing content and metadata.Yes
└─ contentstringThe content of the memory.No
└─ kindMemoryItemKindThe kind of the memory item.No
└─ memory_idstringThe unique ID of the memory item.No
└─ scopestringThe namespace that logically groups and isolates memories, such as a user ID.No
└─ updated_atintegerThe last update time of the memory item.No

MemoryOperationKind

Memory operation kind.
PropertyValue
DescriptionMemory operation kind.
Typestring
Valuescreate
update
delete

MemorySearchItem

A retrieved memory item from memory search.
NameTypeDescriptionRequiredDefault
memory_itemobjectA single memory item stored in the memory store, containing content and metadata.Yes
└─ contentstringThe content of the memory.No
└─ kindMemoryItemKindThe kind of the memory item.No
└─ memory_idstringThe unique ID of the memory item.No
└─ scopestringThe namespace that logically groups and isolates memories, such as a user ID.No
└─ updated_atintegerThe last update time of the memory item.No

MemorySearchOptions

Memory search options.
NameTypeDescriptionRequiredDefault
max_memoriesintegerMaximum number of memory items to return.No

MemorySearchTool

A tool for integrating memories into the agent.
NameTypeDescriptionRequiredDefault
memory_store_namestringThe name of the memory store to use.Yes
scopestringThe namespace used to group and isolate memories, such as a user ID.
Limits which memories can be retrieved or updated.
Use special variable {{$userId}} to scope memories to the current signed-in user.
Yes
search_optionsobjectMemory search options.No
└─ max_memoriesintegerMaximum number of memory items to return.No
typeenumThe type of the tool. Always memory_search.
Possible values: memory_search
Yes
update_delayintegerTime to wait before updating memories after inactivity (seconds). Default 300.No300
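As a sketch of the fields above, a memory_search tool entry for an agent definition might be assembled like this. The memory store name is a made-up placeholder; the field names follow the MemorySearchTool table.

```python
# Hypothetical memory_search tool configuration, following the
# MemorySearchTool fields above. "support-memories" is a made-up
# memory store name.
memory_search_tool = {
    "type": "memory_search",                  # always memory_search
    "memory_store_name": "support-memories",  # hypothetical store name
    "scope": "{{$userId}}",                   # scope memories to the signed-in user
    "search_options": {"max_memories": 5},    # cap the number of returned memories
    "update_delay": 300,                      # seconds of inactivity before updating
}
```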

MemorySearchToolCallItemParam

NameTypeDescriptionRequiredDefault
resultsarrayThe results returned from the memory search.No
typeenum
Possible values: memory_search_call
Yes

MemorySearchToolCallItemResource

NameTypeDescriptionRequiredDefault
resultsarrayThe results returned from the memory search.No
statusenumThe status of the memory search tool call. One of in_progress,
searching, completed, incomplete, or failed.
Possible values: in_progress, searching, completed, incomplete, failed
Yes
typeenum
Possible values: memory_search_call
Yes

MemoryStoreDefaultDefinition

Default memory store implementation.
NameTypeDescriptionRequiredDefault
chat_modelstringThe name or identifier of the chat completion model deployment used for memory processing.Yes
embedding_modelstringThe name or identifier of the embedding model deployment used for memory processing.Yes
kindenumThe kind of the memory store.
Possible values: default
Yes
optionsobjectDefault memory store configurations.No
└─ chat_summary_enabledbooleanWhether to enable chat summary extraction and storage. Default is true.NoTrue
└─ user_profile_detailsstringSpecific categories or types of user profile information to extract and store.No
└─ user_profile_enabledbooleanWhether to enable user profile extraction and storage. Default is true.NoTrue

MemoryStoreDefaultOptions

Default memory store configurations.
NameTypeDescriptionRequiredDefault
chat_summary_enabledbooleanWhether to enable chat summary extraction and storage. Default is true.YesTrue
user_profile_detailsstringSpecific categories or types of user profile information to extract and store.No
user_profile_enabledbooleanWhether to enable user profile extraction and storage. Default is true.YesTrue
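A default memory store definition combining the two tables above might look like the following sketch. The deployment names are placeholders for chat and embedding model deployments in your project.

```python
# Hypothetical "default" memory store definition per
# MemoryStoreDefaultDefinition / MemoryStoreDefaultOptions above.
# "gpt-4o" and "text-embedding-3-small" stand in for your own deployments.
memory_store_definition = {
    "kind": "default",
    "chat_model": "gpt-4o",
    "embedding_model": "text-embedding-3-small",
    "options": {
        "chat_summary_enabled": True,   # store chat summaries (default)
        "user_profile_enabled": True,   # store user profile facts (default)
        # Optional: narrow which profile details get extracted.
        "user_profile_details": "dietary preferences and accessibility needs",
    },
}
```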

MemoryStoreDefinition

Base definition for memory store configurations.

Discriminator for MemoryStoreDefinition

This component uses the property kind to discriminate between different types:
Type ValueSchema
defaultMemoryStoreDefaultDefinition
NameTypeDescriptionRequiredDefault
kindobjectThe type of memory store implementation to use.Yes

MemoryStoreDeleteScopeResponse

Response for deleting memories from a scope.
NameTypeDescriptionRequiredDefault
deletedbooleanWhether the deletion operation was successful.Yes
namestringThe name of the memory store.Yes
objectenumThe object type. Always ‘memory_store.scope.deleted’.
Possible values: memory_store.scope.deleted
Yes
scopestringThe scope from which memories were deleted.Yes

MemoryStoreKind

The type of memory store implementation to use.
PropertyValue
DescriptionThe type of memory store implementation to use.
Typestring
Valuesdefault

MemoryStoreObject

A memory store that can store and retrieve user memories.
NameTypeDescriptionRequiredDefault
created_atintegerThe Unix timestamp (seconds) when the memory store was created.Yes
definitionobjectBase definition for memory store configurations.Yes
└─ kindMemoryStoreKindThe kind of the memory store.No
descriptionstringA human-readable description of the memory store.No
idstringThe unique identifier of the memory store.Yes
metadataobjectArbitrary key-value metadata to associate with the memory store.No
namestringThe name of the memory store.Yes
objectenumThe object type, which is always ‘memory_store’.
Possible values: memory_store
Yes
updated_atintegerThe Unix timestamp (seconds) when the memory store was last updated.Yes

MemoryStoreOperationUsage

Usage statistics of a memory store operation.
NameTypeDescriptionRequiredDefault
embedding_tokensintegerThe number of embedding tokens.Yes
input_tokensintegerThe number of input tokens.Yes
input_tokens_detailsobjectA detailed breakdown of the input tokens.Yes
└─ cached_tokensintegerThe number of tokens that were retrieved from the cache.
More on prompt caching.
No
output_tokensintegerThe number of output tokens.Yes
output_tokens_detailsobjectA detailed breakdown of the output tokens.Yes
└─ reasoning_tokensintegerThe number of reasoning tokens.No
total_tokensintegerThe total number of tokens used.Yes

MemoryStoreSearchResponse

Memory search response.
NameTypeDescriptionRequiredDefault
memoriesarrayRelated memory items found during the search operation.Yes
search_idstringThe unique ID of this search request. Use this value as previous_search_id in subsequent requests to perform incremental searches.Yes
usageobjectUsage statistics of a memory store operation.Yes
└─ embedding_tokensintegerThe number of embedding tokens.No
└─ input_tokensintegerThe number of input tokens.No
└─ input_tokens_detailsobjectA detailed breakdown of the input tokens.No
└─ cached_tokensintegerThe number of tokens that were retrieved from the cache.
More on prompt caching.
No
└─ output_tokensintegerThe number of output tokens.No
└─ output_tokens_detailsobjectA detailed breakdown of the output tokens.No
└─ reasoning_tokensintegerThe number of reasoning tokens.No
└─ total_tokensintegerThe total number of tokens used.No
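The search_id chaining described above can be sketched as a small request builder. The exact request body shape (including a query field) is an assumption here, not the authoritative schema; what the response table does document is that search_id from one response becomes previous_search_id in the next request.

```python
def build_search_body(query, scope, previous_search_id=None, max_memories=None):
    """Assemble a memory search request body. Field names other than
    previous_search_id and search_options are assumptions for illustration."""
    body = {"query": query, "scope": scope}
    if previous_search_id is not None:
        # Reuse an earlier response's search_id for an incremental search.
        body["previous_search_id"] = previous_search_id
    if max_memories is not None:
        body["search_options"] = {"max_memories": max_memories}
    return body

first = build_search_body("shipping address", "user-123", max_memories=3)
follow_up = build_search_body("shipping address", "user-123",
                              previous_search_id="search_abc")
```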

MemoryStoreUpdateCompletedResult

Memory update result.
NameTypeDescriptionRequiredDefault
memory_operationsarrayA list of individual memory operations that were performed during the update.Yes
usageobjectUsage statistics of a memory store operation.Yes
└─ embedding_tokensintegerThe number of embedding tokens.No
└─ input_tokensintegerThe number of input tokens.No
└─ input_tokens_detailsobjectA detailed breakdown of the input tokens.No
└─ cached_tokensintegerThe number of tokens that were retrieved from the cache.
More on prompt caching.
No
└─ output_tokensintegerThe number of output tokens.No
└─ output_tokens_detailsobjectA detailed breakdown of the output tokens.No
└─ reasoning_tokensintegerThe number of reasoning tokens.No
└─ total_tokensintegerThe total number of tokens used.No

MemoryStoreUpdateResponse

Provides the status of a memory store update operation.
NameTypeDescriptionRequiredDefault
errorobjectNo
└─ additionalInfoobjectNo
└─ codestringNo
└─ debugInfoobjectNo
└─ detailsarrayNo
└─ messagestringNo
└─ paramstringNo
└─ typestringNo
resultobjectMemory update result.No
└─ memory_operationsarrayA list of individual memory operations that were performed during the update.No
└─ usageMemoryStoreOperationUsageUsage statistics associated with the memory update operation.No
statusobjectStatus of a memory store update operation.Yes
superseded_bystringThe update_id the operation was superseded by when status is “superseded”.No
update_idstringThe unique ID of this update request. Use this value as previous_update_id in subsequent requests to perform incremental updates.Yes

MemoryStoreUpdateStatus

Status of a memory store update operation.
PropertyValue
DescriptionStatus of a memory store update operation.
Typestring
Valuesqueued
in_progress
completed
failed
superseded
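When polling MemoryStoreUpdateResponse.status, it helps to separate the two in-flight values from the three terminal ones. A minimal sketch:

```python
# The five MemoryStoreUpdateStatus values, split into in-flight and
# terminal states for polling loops.
IN_FLIGHT = {"queued", "in_progress"}
TERMINAL = {"completed", "failed", "superseded"}

def is_terminal(status: str) -> bool:
    """True once an update operation will make no further progress."""
    if status not in IN_FLIGHT | TERMINAL:
        raise ValueError(f"unknown status: {status}")
    return status in TERMINAL
```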

MicrosoftFabricAgentTool

The input definition information for a Microsoft Fabric tool as used to configure an agent.
NameTypeDescriptionRequiredDefault
fabric_dataagent_previewobjectThe fabric data agent tool parameters.Yes
└─ project_connectionsarrayThe project connections attached to this tool. There can be a maximum of 1 connection
resource attached to the tool.
No
typeenumThe object type, which is always ‘fabric_dataagent_preview’.
Possible values: fabric_dataagent_preview
Yes

ModelDeployment

Model Deployment Definition
NameTypeDescriptionRequiredDefault
capabilitiesobjectCapabilities of deployed modelYes
connectionNamestringName of the connection the deployment comes fromNo
modelNamestringPublisher-specific name of the deployed modelYes
modelPublisherstringName of the deployed model’s publisherYes
modelVersionstringPublisher-specific version of the deployed modelYes
skuobjectSku informationYes
└─ capacityintegerSku capacityNo
└─ familystringSku familyNo
└─ namestringSku nameNo
└─ sizestringSku sizeNo
└─ tierstringSku tierNo
typeenumThe type of the deployment
Possible values: ModelDeployment
Yes

ModelSamplingParams

Represents a set of parameters used to control the sampling behavior of a language model during text generation.
NameTypeDescriptionRequiredDefault
max_completion_tokensintegerThe maximum number of tokens allowed in the completion.Yes
seedintegerThe random seed for reproducibility.Yes
temperaturenumberThe temperature parameter for sampling.Yes
top_pnumberThe top-p parameter for nucleus sampling.Yes
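Since all four ModelSamplingParams fields are required, a complete object supplies each of them; the values below are illustrative only.

```python
# A complete ModelSamplingParams object; every field is required.
# The specific values are illustrative, not recommendations.
sampling_params = {
    "max_completion_tokens": 1024,  # cap on generated tokens
    "seed": 42,                     # fixed seed for reproducibility
    "temperature": 0.7,             # sampling temperature
    "top_p": 0.95,                  # nucleus sampling threshold
}
```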

ModelSamplingParamsUpdate

Represents a set of parameters used to control the sampling behavior of a language model during text generation.
NameTypeDescriptionRequiredDefault
max_completion_tokensintegerThe maximum number of tokens allowed in the completion.No
seedintegerThe random seed for reproducibility.No
temperaturenumberThe temperature parameter for sampling.No
top_pnumberThe top-p parameter for nucleus sampling.No

MonthlyRecurrenceSchedule

Monthly recurrence schedule.
NameTypeDescriptionRequiredDefault
daysOfMontharrayDays of the month for the recurrence schedule.Yes
typeenumMonthly recurrence type.
Possible values: Monthly
Yes

NoAuthenticationCredentials

Credentials that do not require authentication
NameTypeDescriptionRequiredDefault
typeenumThe credential type
Possible values: None
Yes

OAuthConsentRequestItemResource

Request from the service for the user to perform OAuth consent.
NameTypeDescriptionRequiredDefault
consent_linkstringThe link the user can use to perform OAuth consent.Yes
idstringYes
server_labelstringThe server label for the OAuth consent request.Yes
typeenum
Possible values: oauth_consent_request
Yes

OneTimeTrigger

One-time trigger.
NameTypeDescriptionRequiredDefault
timeZonestringTime zone for the one-time trigger.NoUTC
triggerAtstringDate and time for the one-time trigger in ISO 8601 format.Yes
typeenum
Possible values: OneTime
Yes
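A OneTimeTrigger sketch: triggerAt must be an ISO 8601 timestamp, and timeZone defaults to UTC when omitted. The fire time here is an arbitrary example.

```python
from datetime import datetime, timezone

# Hypothetical one-time trigger. triggerAt must be ISO 8601;
# timeZone defaults to "UTC" if left out.
fire_at = datetime(2025, 1, 1, 9, 30, tzinfo=timezone.utc)
one_time_trigger = {
    "type": "OneTime",
    "triggerAt": fire_at.isoformat(),  # ISO 8601 string
    "timeZone": "UTC",
}
```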

OpenAI.Annotation

Discriminator for OpenAI.Annotation

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.AnnotationTypeYes

OpenAI.AnnotationFileCitation

A citation to a file.
NameTypeDescriptionRequiredDefault
file_idstringThe ID of the file.Yes
filenamestringThe filename of the file cited.Yes
indexintegerThe index of the file in the list of files.Yes
typeenumThe type of the file citation. Always file_citation.
Possible values: file_citation
Yes

OpenAI.AnnotationFilePath

A path to a file.
NameTypeDescriptionRequiredDefault
file_idstringThe ID of the file.Yes
indexintegerThe index of the file in the list of files.Yes
typeenumThe type of the file path. Always file_path.
Possible values: file_path
Yes

OpenAI.AnnotationType

PropertyValue
Typestring
Valuesfile_citation
url_citation
file_path
container_file_citation

OpenAI.AnnotationUrlCitation

A citation for a web resource used to generate a model response.
NameTypeDescriptionRequiredDefault
end_indexintegerThe index of the last character of the URL citation in the message.Yes
start_indexintegerThe index of the first character of the URL citation in the message.Yes
titlestringThe title of the web resource.Yes
typeenumThe type of the URL citation. Always url_citation.
Possible values: url_citation
Yes
urlstringThe URL of the web resource.Yes

OpenAI.ApproximateLocation

NameTypeDescriptionRequiredDefault
citystringNo
countrystringNo
regionstringNo
timezonestringNo
typeenum
Possible values: approximate
Yes

OpenAI.ChatCompletionTool

A function tool that can be used to generate a response.
NameTypeDescriptionRequiredDefault
functionOpenAI.FunctionObjectYes
typeenumThe type of the tool. Currently, only function is supported.
Possible values: function
Yes

OpenAI.CodeInterpreterOutput

Discriminator for OpenAI.CodeInterpreterOutput

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.CodeInterpreterOutputTypeYes

OpenAI.CodeInterpreterOutputImage

The image output from the code interpreter.
NameTypeDescriptionRequiredDefault
typeenumThe type of the output. Always ‘image’.
Possible values: image
Yes
urlstringThe URL of the image output from the code interpreter.Yes

OpenAI.CodeInterpreterOutputLogs

The logs output from the code interpreter.
NameTypeDescriptionRequiredDefault
logsstringThe logs output from the code interpreter.Yes
typeenumThe type of the output. Always ‘logs’.
Possible values: logs
Yes

OpenAI.CodeInterpreterOutputType

PropertyValue
Typestring
Valueslogs
image

OpenAI.CodeInterpreterTool

A tool that runs Python code to help generate a response to a prompt.
NameTypeDescriptionRequiredDefault
containerobjectConfiguration for a code interpreter container. Optionally specify the IDs
of the files to run the code on.
Yes
└─ file_idsarrayAn optional list of uploaded files to make available to your code.No
└─ typeenumAlways auto.
Possible values: auto
No
typeenumThe type of the code interpreter tool. Always code_interpreter.
Possible values: code_interpreter
Yes

OpenAI.CodeInterpreterToolAuto

Configuration for a code interpreter container. Optionally specify the IDs of the files to run the code on.
NameTypeDescriptionRequiredDefault
file_idsarrayAn optional list of uploaded files to make available to your code.No
typeenumAlways auto.
Possible values: auto
Yes
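Putting the two tables together, a code interpreter tool with an auto container and pre-uploaded files might be configured like this (the file IDs are placeholders):

```python
# Code interpreter tool with an "auto" container configuration.
# The file IDs are hypothetical placeholders for uploaded files.
code_interpreter_tool = {
    "type": "code_interpreter",
    "container": {
        "type": "auto",
        "file_ids": ["file-abc", "file-def"],  # optional uploaded files
    },
}
```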

OpenAI.CodeInterpreterToolCallItemParam

A tool call to run code.
NameTypeDescriptionRequiredDefault
codestringThe code to run, or null if not available.Yes
container_idstringThe ID of the container used to run the code.Yes
outputsarrayThe outputs generated by the code interpreter, such as logs or images.
Can be null if no outputs are available.
Yes
typeenum
Possible values: code_interpreter_call
Yes

OpenAI.CodeInterpreterToolCallItemResource

A tool call to run code.
NameTypeDescriptionRequiredDefault
codestringThe code to run, or null if not available.Yes
container_idstringThe ID of the container used to run the code.Yes
outputsarrayThe outputs generated by the code interpreter, such as logs or images.
Can be null if no outputs are available.
Yes
statusenum
Possible values: in_progress, completed, incomplete, interpreting, failed
Yes
typeenum
Possible values: code_interpreter_call
Yes

OpenAI.ComparisonFilter

A filter used to compare a specified attribute key to a given value using a defined comparison operation.
NameTypeDescriptionRequiredDefault
keystringThe key to compare against the value.Yes
typeenumSpecifies the comparison operator:
eq (equal), ne (not equal), gt (greater than), gte (greater than or equal), lt (less than), lte (less than or equal).
Possible values: eq, ne, gt, gte, lt, lte
Yes
valuestring or number or booleanYes

OpenAI.CompoundFilter

Combine multiple filters using and or or.
NameTypeDescriptionRequiredDefault
filtersarrayArray of filters to combine. Items can be ComparisonFilter or CompoundFilter.Yes
typeenumType of operation: and or or.
Possible values: and, or
Yes
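Comparison and compound filters nest as plain objects. The sketch below combines two ComparisonFilters with and; the attribute keys and values are illustrative.

```python
# CompoundFilter joining two ComparisonFilters:
#   category == "report" AND rating >= 4
# Keys and values are illustrative examples.
filter_spec = {
    "type": "and",
    "filters": [
        {"type": "eq", "key": "category", "value": "report"},
        {"type": "gte", "key": "rating", "value": 4},
    ],
}
```

Because CompoundFilter items may themselves be CompoundFilters, the same shape nests to express expressions like `(a AND b) OR c`.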

OpenAI.ComputerAction

Discriminator for OpenAI.ComputerAction

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.ComputerActionTypeYes

OpenAI.ComputerActionClick

A click action.
NameTypeDescriptionRequiredDefault
buttonenumIndicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.
Possible values: left, right, wheel, back, forward
Yes
typeenumSpecifies the event type. For a click action, this property is
always set to click.
Possible values: click
Yes
xintegerThe x-coordinate where the click occurred.Yes
yintegerThe y-coordinate where the click occurred.Yes
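A click action serialized per the table above (button, coordinates, and the fixed type value):

```python
# A ComputerActionClick payload: a left click at (100, 200).
click_action = {
    "type": "click",    # always "click" for this action
    "button": "left",   # one of left, right, wheel, back, forward
    "x": 100,
    "y": 200,
}
```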

OpenAI.ComputerActionDoubleClick

A double click action.
NameTypeDescriptionRequiredDefault
typeenumSpecifies the event type. For a double click action, this property is
always set to double_click.
Possible values: double_click
Yes
xintegerThe x-coordinate where the double click occurred.Yes
yintegerThe y-coordinate where the double click occurred.Yes

OpenAI.ComputerActionDrag

A drag action.
NameTypeDescriptionRequiredDefault
patharrayAn array of coordinates representing the path of the drag action. Coordinates appear as an array
of objects, e.g. [ { x: 100, y: 200 }, { x: 200, y: 300 } ].
Yes
typeenumSpecifies the event type. For a drag action, this property is
always set to drag.
Possible values: drag
Yes

OpenAI.ComputerActionKeyPress

A collection of keypresses the model would like to perform.
NameTypeDescriptionRequiredDefault
keysarrayThe combination of keys the model is requesting to be pressed. This is an
array of strings, each representing a key.
Yes
typeenumSpecifies the event type. For a keypress action, this property is
always set to keypress.
Possible values: keypress
Yes

OpenAI.ComputerActionMove

A mouse move action.
NameTypeDescriptionRequiredDefault
typeenumSpecifies the event type. For a move action, this property is
always set to move.
Possible values: move
Yes
xintegerThe x-coordinate to move to.Yes
yintegerThe y-coordinate to move to.Yes

OpenAI.ComputerActionScreenshot

A screenshot action.
NameTypeDescriptionRequiredDefault
typeenumSpecifies the event type. For a screenshot action, this property is
always set to screenshot.
Possible values: screenshot
Yes

OpenAI.ComputerActionScroll

A scroll action.
NameTypeDescriptionRequiredDefault
scroll_xintegerThe horizontal scroll distance.Yes
scroll_yintegerThe vertical scroll distance.Yes
typeenumSpecifies the event type. For a scroll action, this property is
always set to scroll.
Possible values: scroll
Yes
xintegerThe x-coordinate where the scroll occurred.Yes
yintegerThe y-coordinate where the scroll occurred.Yes

OpenAI.ComputerActionType

PropertyValue
Typestring
Valuesscreenshot
click
double_click
scroll
type
wait
keypress
drag
move

OpenAI.ComputerActionTypeKeys

An action to type in text.
NameTypeDescriptionRequiredDefault
textstringThe text to type.Yes
typeenumSpecifies the event type. For a type action, this property is
always set to type.
Possible values: type
Yes

OpenAI.ComputerActionWait

A wait action.
NameTypeDescriptionRequiredDefault
typeenumSpecifies the event type. For a wait action, this property is
always set to wait.
Possible values: wait
Yes

OpenAI.ComputerToolCallItemParam

A tool call to a computer use tool. See the computer use guide for more information.
NameTypeDescriptionRequiredDefault
actionOpenAI.ComputerActionYes
call_idstringAn identifier used when responding to the tool call with output.Yes
pending_safety_checksarrayThe pending safety checks for the computer call.Yes
typeenum
Possible values: computer_call
Yes

OpenAI.ComputerToolCallItemResource

A tool call to a computer use tool. See the computer use guide for more information.
NameTypeDescriptionRequiredDefault
actionOpenAI.ComputerActionYes
call_idstringAn identifier used when responding to the tool call with output.Yes
pending_safety_checksarrayThe pending safety checks for the computer call.Yes
statusenumThe status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
Possible values: in_progress, completed, incomplete
Yes
typeenum
Possible values: computer_call
Yes

OpenAI.ComputerToolCallOutputItemOutput

Discriminator for OpenAI.ComputerToolCallOutputItemOutput

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.ComputerToolCallOutputItemOutputTypeA computer screenshot image used with the computer use tool.Yes

OpenAI.ComputerToolCallOutputItemOutputComputerScreenshot

NameTypeDescriptionRequiredDefault
file_idstringNo
image_urlstringNo
typeenum
Possible values: computer_screenshot
Yes

OpenAI.ComputerToolCallOutputItemOutputType

A computer screenshot image used with the computer use tool.
PropertyValue
DescriptionA computer screenshot image used with the computer use tool.
Typestring
Valuescomputer_screenshot

OpenAI.ComputerToolCallOutputItemParam

The output of a computer tool call.
NameTypeDescriptionRequiredDefault
acknowledged_safety_checksarrayThe safety checks reported by the API that have been acknowledged by the
developer.
No
call_idstringThe ID of the computer tool call that produced the output.Yes
outputOpenAI.ComputerToolCallOutputItemOutputYes
typeenum
Possible values: computer_call_output
Yes

OpenAI.ComputerToolCallOutputItemResource

The output of a computer tool call.
NameTypeDescriptionRequiredDefault
acknowledged_safety_checksarrayThe safety checks reported by the API that have been acknowledged by the
developer.
No
call_idstringThe ID of the computer tool call that produced the output.Yes
outputOpenAI.ComputerToolCallOutputItemOutputYes
statusenumThe status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
Possible values: in_progress, completed, incomplete
Yes
typeenum
Possible values: computer_call_output
Yes

OpenAI.ComputerToolCallSafetyCheck

A pending safety check for the computer call.
NameTypeDescriptionRequiredDefault
codestringThe type of the pending safety check.Yes
idstringThe ID of the pending safety check.Yes
messagestringDetails about the pending safety check.Yes

OpenAI.ComputerUsePreviewTool

A tool that controls a virtual computer.
NameTypeDescriptionRequiredDefault
display_heightintegerThe height of the computer display.Yes
display_widthintegerThe width of the computer display.Yes
environmentenumThe type of computer environment to control.
Possible values: windows, mac, linux, ubuntu, browser
Yes
typeenumThe type of the computer use tool. Always computer_use_preview.
Possible values: computer_use_preview
Yes
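A complete ComputerUsePreviewTool object supplies the display dimensions and environment; the values below are one plausible browser setup.

```python
# ComputerUsePreviewTool configuration for a browser environment.
# Display dimensions are illustrative.
computer_tool = {
    "type": "computer_use_preview",
    "display_width": 1280,
    "display_height": 800,
    "environment": "browser",  # one of windows, mac, linux, ubuntu, browser
}
```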

OpenAI.ConversationItemList

NameTypeDescriptionRequiredDefault
dataarrayYes
first_idstringYes
has_morebooleanYes
last_idstringYes
objectenum
Possible values: list
Yes

OpenAI.ConversationResource

NameTypeDescriptionRequiredDefault
created_atintegerYes
idstringThe unique ID of the conversation.Yes
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
Yes
objectenumThe object type, which is always ‘conversation’.
Possible values: conversation
Yes

OpenAI.Coordinate

An x/y coordinate pair, e.g. { x: 100, y: 200 }.
NameTypeDescriptionRequiredDefault
xintegerThe x-coordinate.Yes
yintegerThe y-coordinate.Yes

OpenAI.CreateConversationRequest

Create a conversation
NameTypeDescriptionRequiredDefault
itemsarrayInitial items to include in the conversation context.
You may add up to 20 items at a time.
No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No

OpenAI.CreateEvalCompletionsRunDataSource

A CompletionsRunDataSource object describing a model sampling configuration.
NameTypeDescriptionRequiredDefault
input_messagesobjectNo
└─ item_referencestringNo
└─ templatearrayNo
└─ typeenum
Possible values: item_reference
No
modelstringThe name of the model to use for generating completions (e.g. “o3-mini”).No
sampling_paramsOpenAI.CreateEvalCompletionsRunDataSourceSamplingParamsNo
sourceobjectYes
└─ contentarrayThe content of the jsonl file.No
└─ created_afterOpenAI.integerNo
└─ created_beforeOpenAI.integerNo
└─ idstringThe identifier of the file.No
└─ limitOpenAI.integerNo
└─ metadataOpenAI.MetadataSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
└─ modelstringNo
└─ typeenumThe type of source. Always stored_completions.
Possible values: stored_completions
No
typeenumThe type of run data source. Always completions.
Possible values: completions
Yes

OpenAI.CreateEvalCompletionsRunDataSourceInputMessagesItemReference

NameTypeDescriptionRequiredDefault
item_referencestringYes
typeenum
Possible values: item_reference
Yes

OpenAI.CreateEvalCompletionsRunDataSourceInputMessagesTemplate

NameTypeDescriptionRequiredDefault
templatearrayYes
typeenum
Possible values: template
Yes

OpenAI.CreateEvalCompletionsRunDataSourceSamplingParams

NameTypeDescriptionRequiredDefault
max_completion_tokensOpenAI.integerNo
reasoning_effortOpenAI.ReasoningEffortConstrains effort on reasoning for reasoning models.

Currently supported values are none, minimal, low, medium, and high.

Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.

All models before gpt-5.1 default to medium reasoning effort, and do not support none.

The gpt-5-pro model defaults to (and only supports) high reasoning effort.
No
response_formatobjectDefault response format. Used to generate text responses.No
└─ json_schemaobjectStructured Outputs configuration options, including a JSON Schema.No
└─ descriptionstringA description of what the response format is for, used by the model to
determine how to respond in the format.
No
└─ namestringThe name of the response format. Must be a-z, A-Z, 0-9, or contain
underscores and dashes, with a maximum length of 64.
No
└─ schemaobjectNo
└─ strictbooleanWhether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs guide.
NoFalse
└─ typeenumThe type of response format being defined. Always json_object.
Possible values: json_object
No
seedobjectNo
temperatureobjectNo
toolsarrayNo
top_pobjectNo

OpenAI.CreateEvalCustomDataSourceConfig

A CustomDataSourceConfig object that defines the schema for the data source used for the evaluation runs. This schema is used to define the shape of the data that will be:
  • Used to define your testing criteria and
  • What data is required when creating a run
NameTypeDescriptionRequiredDefault
include_sample_schemabooleanWhether the eval should expect you to populate the sample namespace (i.e., by generating responses from your data source)No
item_schemaobjectThe json schema for each row in the data source.Yes
typeenumThe type of data source. Always custom.
Possible values: custom
Yes
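A custom data source config ties the eval's row shape to a JSON schema. In this hypothetical sketch each row carries a question and an answer, and include_sample_schema tells the eval to expect a populated sample namespace.

```python
# Hypothetical CreateEvalCustomDataSourceConfig: rows are
# {"question": ..., "answer": ...} objects validated by item_schema.
data_source_config = {
    "type": "custom",
    "include_sample_schema": True,  # eval will populate the sample namespace
    "item_schema": {
        "type": "object",
        "properties": {
            "question": {"type": "string"},
            "answer": {"type": "string"},
        },
        "required": ["question", "answer"],
    },
}
```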

OpenAI.CreateEvalJsonlRunDataSource

A JsonlRunDataSource object that specifies a JSONL file that matches the eval
NameTypeDescriptionRequiredDefault
sourceobjectYes
└─ contentarrayThe content of the jsonl file.No
└─ idstringThe identifier of the file.No
└─ typeenumThe type of jsonl source. Always file_id.
Possible values: file_id
No
typeenumThe type of data source. Always jsonl.
Possible values: jsonl
Yes

OpenAI.CreateEvalLogsDataSourceConfig

A data source config which specifies the metadata property of your logs query. This is usually metadata like usecase=chatbot or prompt-version=v2, etc.
NameTypeDescriptionRequiredDefault
metadataobjectMetadata filters for the logs data source.No
typeenumThe type of data source. Always logs.
Possible values: logs
Yes

OpenAI.CreateEvalResponsesRunDataSource

A ResponsesRunDataSource object describing a model sampling configuration.
NameTypeDescriptionRequiredDefault
input_messagesobjectNo
└─ item_referencestringNo
└─ templatearrayNo
└─ typeenum
Possible values: item_reference
No
modelstringThe name of the model to use for generating completions (e.g. “o3-mini”).No
sampling_paramsOpenAI.CreateEvalResponsesRunDataSourceSamplingParamsNo
sourceobjectYes
└─ contentarrayThe content of the jsonl file.No
└─ created_afterOpenAI.integerNo
└─ created_beforeOpenAI.integerNo
└─ idstringThe identifier of the file.No
└─ instructions_searchstringNo
└─ metadataobjectNo
└─ modelstringNo
└─ reasoning_effortOpenAI.ReasoningEffortConstrains effort on reasoning for reasoning models.

Currently supported values are none, minimal, low, medium, and high.

Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.

All models before gpt-5.1 default to medium reasoning effort, and do not support none.

The gpt-5-pro model defaults to (and only supports) high reasoning effort.
No
└─ temperatureOpenAI.numericNo
└─ toolsarrayNo
└─ top_pOpenAI.numericNo
└─ typeenumThe type of run data source. Always responses.
Possible values: responses
No
└─ usersarrayNo
typeenumThe type of run data source. Always responses.
Possible values: responses
Yes

OpenAI.CreateEvalResponsesRunDataSourceInputMessagesItemReference

NameTypeDescriptionRequiredDefault
item_referencestringYes
typeenum
Possible values: item_reference
Yes

OpenAI.CreateEvalResponsesRunDataSourceInputMessagesTemplate

NameTypeDescriptionRequiredDefault
templatearrayYes
typeenum
Possible values: template
Yes

OpenAI.CreateEvalResponsesRunDataSourceSamplingParams

NameTypeDescriptionRequiredDefault
max_completion_tokensOpenAI.integerNo
reasoning_effortOpenAI.ReasoningEffortConstrains effort on reasoning for reasoning models.

Currently supported values are none, minimal, low, medium, and high.

Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.

All models before gpt-5.1 default to medium reasoning effort, and do not support none.

The gpt-5-pro model defaults to (and only supports) high reasoning effort.
No
seedobjectNo
temperatureobjectNo
textOpenAI.CreateEvalResponsesRunDataSourceSamplingParamsTextNo
toolsarrayNo
top_pobjectNo

OpenAI.CreateEvalResponsesRunDataSourceSamplingParamsText

NameTypeDescriptionRequiredDefault
formatOpenAI.TextResponseFormatConfigurationAn object specifying the format that the model must output.

Configuring { "type": "json_schema" } enables Structured Outputs,
which ensures the model will match your supplied JSON schema. Learn more in the
Structured Outputs guide.

The default format is { "type": "text" } with no additional options.

**Not recommended for gpt-4o and newer models:**

Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
No
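The format options above can be sketched in Python; the schema name and fields below are hypothetical, and only the "type" values come from this reference:

```python
# Illustrative text.format payloads; the "grading_result" schema is hypothetical.
text_format_json_schema = {
    "format": {
        "type": "json_schema",          # enables Structured Outputs
        "name": "grading_result",
        "schema": {
            "type": "object",
            "properties": {"score": {"type": "number"}},
            "required": ["score"],
            "additionalProperties": False,
        },
    }
}

# Default per this reference: plain text with no additional options.
text_format_default = {"format": {"type": "text"}}
```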

OpenAI.CreateEvalStoredCompletionsDataSourceConfig

Deprecated in favor of LogsDataSourceConfig.
NameTypeDescriptionRequiredDefault
metadataobjectMetadata filters for the stored completions data source.No
typeenumThe type of data source. Always stored_completions.
Possible values: stored_completions
Yes

OpenAI.CreateFineTuningJobRequest

Valid models:
babbage-002
davinci-002
gpt-3.5-turbo
gpt-4o-mini
NameTypeDescriptionRequiredDefault
hyperparametersobjectThe hyperparameters used for the fine-tuning job.
This value is now deprecated in favor of method, and should be passed in under the method parameter.
No
└─ batch_sizeenum
Possible values: auto
No
└─ learning_rate_multiplierenum
Possible values: auto
No
└─ n_epochsenum
Possible values: auto
No
integrationsarrayA list of integrations to enable for your fine-tuning job.No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
methodOpenAI.FineTuneMethodThe method used for fine-tuning.No
modelstring (see valid models above)The name of the model to fine-tune. You can select one of the
supported models.
Yes
seedintegerThe seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases.
If a seed is not specified, one will be generated for you.
No
suffixstringA string of up to 64 characters that will be added to your fine-tuned model name.

For example, a suffix of “custom-model-name” would produce a model name like ft:gpt-4o-mini:openai:custom-model-name:7p4lURel.
NoNone
training_filestringThe ID of an uploaded file that contains training data.

Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose fine-tune.

The contents of the file should differ depending on whether the model uses the chat or completions format, or whether the fine-tuning method uses the preference format.

See the fine-tuning guide for more details.
Yes
validation_filestringThe ID of an uploaded file that contains validation data.

If you provide this file, the data is used to generate validation
metrics periodically during fine-tuning. These metrics can be viewed in
the fine-tuning results file.
The same data should not be present in both train and validation files.

Your dataset must be formatted as a JSONL file. You must upload your file with the purpose fine-tune.

See the fine-tuning guide for more details.
No
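A hedged sketch of a request body assembled from the fields above; the file IDs and suffix are placeholders, and the deprecated top-level hyperparameters field is omitted in favor of method, as the reference recommends:

```python
# Hypothetical CreateFineTuningJobRequest body; file IDs are placeholders.
create_ft_job = {
    "model": "gpt-4o-mini",             # one of the listed valid models
    "training_file": "file-abc123",     # required; JSONL uploaded with purpose fine-tune
    "validation_file": "file-def456",   # optional; must not overlap training data
    "suffix": "custom-model-name",      # up to 64 characters
    "seed": 42,                         # controls reproducibility
    "method": {
        "type": "supervised",
        "supervised": {
            "hyperparameters": {
                "batch_size": "auto",
                "learning_rate_multiplier": "auto",
                "n_epochs": "auto",
            }
        },
    },
}
```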

OpenAI.CreateFineTuningJobRequestIntegration

Discriminator for OpenAI.CreateFineTuningJobRequestIntegration

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typestring (see valid types below)Yes

OpenAI.CreateFineTuningJobRequestWandbIntegration

NameTypeDescriptionRequiredDefault
typeenum
Possible values: wandb
Yes
wandbobjectYes
└─ entitystringNo
└─ namestringNo
└─ projectstringNo
└─ tagsarrayNo

OpenAI.CreateResponse

NameTypeDescriptionRequiredDefault
agentobjectNo
└─ namestringThe name of the agent.No
└─ typeenum
Possible values: agent_reference
No
└─ versionstringThe version identifier of the agent.No
backgroundbooleanWhether to run the model response in the background.
Learn more about background responses.
NoFalse
conversationobjectNo
└─ idstringNo
includearraySpecify additional output data to include in the model response. Currently
supported values are:
- code_interpreter_call.outputs: Includes the outputs of python code execution
in code interpreter tool call items.
- computer_call_output.output.image_url: Include image urls from the computer call output.
- file_search_call.results: Include the search results of
the file search tool call.
- message.input_image.image_url: Include image urls from the input message.
- message.output_text.logprobs: Include logprobs with assistant messages.
- reasoning.encrypted_content: Includes an encrypted version of reasoning
tokens in reasoning item outputs. This enables reasoning items to be used in
multi-turn conversations when using the Responses API statelessly (like
when the store parameter is set to false, or when an organization is
enrolled in the zero data retention program).
No
inputstring or arrayNo
instructionsstringA system (or developer) message inserted into the model’s context.

When using along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
No
max_output_tokensintegerAn upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.No
max_tool_callsintegerThe maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
modelstringThe model deployment to use for the creation of this response.No
parallel_tool_callsbooleanWhether to allow the model to run tool calls in parallel.NoTrue
previous_response_idstringThe unique ID of the previous response to the model. Use this to
create multi-turn conversations. Learn more about
managing conversation state.
No
promptobjectReference to a prompt template and its variables.
Learn more.
No
└─ idstringThe unique identifier of the prompt template to use.No
└─ variablesOpenAI.ResponsePromptVariablesOptional map of values to substitute in for variables in your
prompt. The substitution values can either be strings, or other
Response input types like images or files.
No
└─ versionstringOptional version of the prompt template.No
reasoningobjecto-series models only

Configuration options for reasoning models.
No
└─ effortOpenAI.ReasoningEffortConstrains effort on reasoning for reasoning models.

Currently supported values are none, minimal, low, medium, and high.

Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.

All models before gpt-5.1 default to medium reasoning effort, and do not support none.

The gpt-5-pro model defaults to (and only supports) high reasoning effort.
No
└─ generate_summaryenumDeprecated: use summary instead. A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model’s reasoning process. One of auto, concise, or detailed.
Possible values: auto, concise, detailed
No
└─ summaryenumA summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model’s reasoning process.
One of auto, concise, or detailed.
Possible values: auto, concise, detailed
No
service_tierobjectSpecifies the processing type used for serving the request.
* If set to ‘auto’, then the request will be processed with the service tier
configured in the Project settings. Unless otherwise configured, the Project will use ‘default’.
* If set to ‘default’, then the request will be processed with the standard
pricing and performance for the selected model.
* If set to ‘flex’ or ‘priority’, then the request will be processed with the corresponding service
tier. Contact sales to learn more about Priority processing.
* When not set, the default behavior is ‘auto’.

When the service_tier parameter is set, the response body will include the service_tier
value based on the processing mode actually used to serve the request. This response value
may be different from the value set in the parameter.
No
storebooleanWhether to store the generated model response for later retrieval via
API.
NoTrue
streambooleanIf set to true, the model response data will be streamed to the client
as it is generated using server-sent events.

See the streaming guide for more information.
NoFalse
structured_inputsobjectThe structured inputs to the response that can participate in prompt template substitution or tool argument bindings.No
temperaturenumberWhat sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
No1
textobjectConfiguration options for a text response from the model. Can be plain
text or structured JSON data. See Text inputs and outputs
and Structured Outputs.
No
└─ formatOpenAI.ResponseTextFormatConfigurationNo
tool_choiceobjectControls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or
more tools.

required means the model must call one or more tools.
No
└─ typeOpenAI.ToolChoiceObjectTypeIndicates that the model should use a built-in tool to generate a response.
Learn more about built-in tools.
No
toolsarrayAn array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

- Built-in tools: Tools that are provided by OpenAI that extend the
model’s capabilities, like file search.
- Function calls (custom tools): Functions that are defined by you,
enabling the model to call your own code.
No
top_logprobsintegerAn integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.No
top_pnumberAn alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or temperature but not both.
No1
truncationenumThe truncation strategy to use for the model response.
- auto: If the context of this response and previous ones exceeds
the model’s context window size, the model will truncate the
response to fit the context window by dropping input items in the
middle of the conversation.
- disabled (default): If a model response will exceed the context window
size for a model, the request will fail with a 400 error.
Possible values: auto, disabled
No
userstringLearn more about safety best practices.No
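As a sketch under stated assumptions (the deployment name and metadata values are placeholders), a minimal CreateResponse body using the fields and defaults documented above might look like:

```python
# Hypothetical CreateResponse body; "my-deployment" is a placeholder.
# Defaults per this reference: store=True, stream=False, temperature=1,
# top_p=1, parallel_tool_calls=True, truncation="disabled".
create_response = {
    "model": "my-deployment",
    "input": "Summarize our release notes in three bullets.",
    "instructions": "You are a concise technical writer.",
    "max_output_tokens": 1024,
    "temperature": 0.2,
    "include": ["message.output_text.logprobs"],
    "metadata": {"team": "docs"},       # up to 16 pairs; keys <=64, values <=512 chars
}

# The metadata constraints stated above can be checked locally:
metadata_ok = all(
    len(k) <= 64 and len(v) <= 512 for k, v in create_response["metadata"].items()
)
```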

OpenAI.DeletedConversationResource

NameTypeDescriptionRequiredDefault
deletedbooleanYes
idstringYes
objectenum
Possible values: conversation.deleted
Yes

OpenAI.EasyInputMessage

NameTypeDescriptionRequiredDefault
contentstring or arrayYes
rolestringYes

OpenAI.Error

NameTypeDescriptionRequiredDefault
additionalInfoobjectNo
codestringYes
debugInfoobjectNo
detailsarrayNo
messagestringYes
paramstringYes
typestringYes

OpenAI.EvalApiError

An object representing an error response from the Eval API.
NameTypeDescriptionRequiredDefault
codestringThe error code.Yes
messagestringThe error message.Yes

OpenAI.EvalGraderLabelModel

NameTypeDescriptionRequiredDefault
inputarrayYes
labelsarrayThe labels to assign to each item in the evaluation.Yes
modelstringThe model to use for the evaluation. Must support structured outputs.Yes
namestringThe name of the grader.Yes
passing_labelsarrayThe labels that indicate a passing result. Must be a subset of labels.Yes
typeenumThe object type, which is always label_model.
Possible values: label_model
Yes
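A minimal sketch of a label_model grader, assuming hypothetical labels and template variables; per the table above, passing_labels must be a subset of labels:

```python
# Hypothetical label_model grader; labels and template variables are illustrative.
label_grader = {
    "type": "label_model",
    "name": "sentiment-check",
    "model": "gpt-4o-mini",             # must support structured outputs
    "input": [
        {"type": "message", "role": "developer",
         "content": "Classify the sentiment of: {{item.text}}"},
    ],
    "labels": ["positive", "neutral", "negative"],
    "passing_labels": ["positive", "neutral"],  # must be a subset of labels
}
```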

OpenAI.EvalGraderPython

NameTypeDescriptionRequiredDefault
image_tagstringThe image tag to use for the python script.No
namestringThe name of the grader.Yes
pass_thresholdobjectNo
sourcestringThe source code of the python script.Yes
typeenumThe object type, which is always python.
Possible values: python
Yes
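A hedged sketch of a python grader; the embedded grading script and its function signature are hypothetical, and only type, name, source, pass_threshold, and image_tag come from this reference:

```python
# Hypothetical python grader; the embedded script is illustrative.
python_grader = {
    "type": "python",
    "name": "exact-match",
    "source": (
        "def grade(sample, item):\n"
        "    return 1.0 if sample.get('output_text') == item.get('reference') else 0.0\n"
    ),
    "pass_threshold": 1.0,              # optional per this reference
    # "image_tag": ...                  # optional; pins the script's image
}
```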

OpenAI.EvalGraderScoreModel

NameTypeDescriptionRequiredDefault
inputarrayThe input text. This may include template strings.Yes
modelstringThe model to use for the evaluation.Yes
namestringThe name of the grader.Yes
pass_thresholdobjectNo
rangearrayThe range of the score. Defaults to [0, 1].No
sampling_paramsobjectNo
└─ max_completions_tokensOpenAI.integerNo
└─ reasoning_effortOpenAI.ReasoningEffortConstrains effort on reasoning for reasoning models.

Currently supported values are none, minimal, low, medium, and high.

Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.

All models before gpt-5.1 default to medium reasoning effort, and do not support none.

The gpt-5-pro model defaults to (and only supports) high reasoning effort.
No
└─ seedOpenAI.integerNo
└─ temperatureOpenAI.numericNo
└─ top_pOpenAI.numericNo
typeenumThe object type, which is always score_model.
Possible values: score_model
Yes

OpenAI.EvalGraderScoreModelSamplingParams

NameTypeDescriptionRequiredDefault
max_completions_tokensobjectNo
reasoning_effortOpenAI.ReasoningEffortConstrains effort on reasoning for reasoning models.

Currently supported values are none, minimal, low, medium, and high.

Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.

All models before gpt-5.1 default to medium reasoning effort, and do not support none.

The gpt-5-pro model defaults to (and only supports) high reasoning effort.
No
seedobjectNo
temperatureobjectNo
top_pobjectNo

OpenAI.EvalGraderStringCheck

NameTypeDescriptionRequiredDefault
inputstringThe input text. This may include template strings.Yes
namestringThe name of the grader.Yes
operationenumThe string check operation to perform. One of eq, ne, like, or ilike.
Possible values: eq, ne, like, ilike
Yes
referencestringThe reference text. This may include template strings.Yes
typeenumThe object type, which is always string_check.
Possible values: string_check
Yes
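A minimal string_check grader sketch; the template strings are illustrative placeholders:

```python
# Hypothetical string_check grader; template strings are illustrative.
string_check = {
    "type": "string_check",
    "name": "contains-answer",
    "input": "{{sample.output_text}}",  # may include template strings
    "reference": "{{item.answer}}",
    "operation": "ilike",               # one of eq, ne, like, ilike
}
```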

OpenAI.EvalGraderTextSimilarity

NameTypeDescriptionRequiredDefault
evaluation_metricenumThe evaluation metric to use. One of cosine, fuzzy_match, bleu,
gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5,
or rouge_l.
Possible values: cosine, fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l
Yes
inputstringThe text being graded.Yes
namestringThe name of the grader.Yes
pass_thresholdobjectYes
referencestringThe text being graded against.Yes
typeenumThe type of grader.
Possible values: text_similarity
Yes
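A text_similarity grader sketch under the fields above; the name, templates, and threshold are illustrative:

```python
# Hypothetical text_similarity grader; values are illustrative.
text_similarity = {
    "type": "text_similarity",
    "name": "fuzzy-answer-match",
    "input": "{{sample.output_text}}",
    "reference": "{{item.answer}}",
    "evaluation_metric": "fuzzy_match", # see the full metric list above
    "pass_threshold": 0.8,              # required for this grader
}
```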

OpenAI.EvalItem

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.
NameTypeDescriptionRequiredDefault
contentobjectA text input to the model.Yes
└─ datastringBase64-encoded audio data.No
└─ detailstringNo
└─ formatenumThe format of the audio data. Currently supported formats are mp3 and
wav.
Possible values: mp3, wav
No
└─ image_urlstringNo
└─ textstringNo
└─ typeenumThe type of the input item. Always input_audio.
Possible values: input_audio
No
roleenumThe role of the message input. One of user, assistant, system, or
developer.
Possible values: user, assistant, system, developer
Yes
typeenumThe type of the message input. Always message.
Possible values: message
No

OpenAI.EvalItemContentInputImage

NameTypeDescriptionRequiredDefault
detailstringNo
image_urlstringYes
typeenum
Possible values: input_image
Yes

OpenAI.EvalItemContentOutputText

NameTypeDescriptionRequiredDefault
textstringYes
typeenum
Possible values: output_text
Yes

OpenAI.EvalJsonlFileContentSource

NameTypeDescriptionRequiredDefault
contentarrayThe content of the jsonl file.Yes
typeenumThe type of jsonl source. Always file_content.
Possible values: file_content
Yes

OpenAI.EvalJsonlFileContentSourceContent

NameTypeDescriptionRequiredDefault
itemobjectYes
sampleobjectNo

OpenAI.EvalJsonlFileIdSource

NameTypeDescriptionRequiredDefault
idstringThe identifier of the file.Yes
typeenumThe type of jsonl source. Always file_id.
Possible values: file_id
Yes

OpenAI.EvalResponsesSource

An EvalResponsesSource object describing a run data source configuration.
NameTypeDescriptionRequiredDefault
created_afterobjectNo
created_beforeobjectNo
instructions_searchstringNo
metadataobjectNo
modelstringNo
reasoning_effortobjectConstrains effort on reasoning for reasoning models.

Currently supported values are none, minimal, low, medium, and high.

Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.

All models before gpt-5.1 default to medium reasoning effort, and do not support none.

The gpt-5-pro model defaults to (and only supports) high reasoning effort.
No
temperatureobjectNo
toolsarrayNo
top_pobjectNo
typeenumThe type of run data source. Always responses.
Possible values: responses
Yes
usersarrayNo

OpenAI.EvalRunOutputItemSample

NameTypeDescriptionRequiredDefault
errorOpenAI.EvalApiErrorAn object representing an error response from the Eval API.Yes
finish_reasonstringYes
inputarrayYes
max_completion_tokensOpenAI.integerYes
modelstringYes
outputarrayYes
seedOpenAI.integerYes
temperatureOpenAI.numericYes
top_pOpenAI.numericYes
usageOpenAI.EvalRunOutputItemSampleUsageYes

OpenAI.EvalRunOutputItemSampleInput

NameTypeDescriptionRequiredDefault
contentstringYes
rolestringYes

OpenAI.EvalRunOutputItemSampleOutput

NameTypeDescriptionRequiredDefault
contentstringNo
rolestringNo

OpenAI.EvalRunOutputItemSampleUsage

NameTypeDescriptionRequiredDefault
cached_tokensOpenAI.integerYes
completion_tokensOpenAI.integerYes
prompt_tokensOpenAI.integerYes
total_tokensOpenAI.integerYes

OpenAI.EvalRunPerModelUsage

NameTypeDescriptionRequiredDefault
cached_tokensOpenAI.integerYes
completion_tokensOpenAI.integerYes
invocation_countOpenAI.integerYes
model_namestringYes
prompt_tokensOpenAI.integerYes
total_tokensOpenAI.integerYes

OpenAI.EvalRunPerTestingCriteriaResults

NameTypeDescriptionRequiredDefault
failedOpenAI.integerYes
passedOpenAI.integerYes
testing_criteriastringYes

OpenAI.EvalRunResultCounts

NameTypeDescriptionRequiredDefault
erroredOpenAI.integerYes
failedOpenAI.integerYes
passedOpenAI.integerYes
totalOpenAI.integerYes

OpenAI.EvalStoredCompletionsSource

A StoredCompletionsRunDataSource configuration describing a set of filters.
NameTypeDescriptionRequiredDefault
created_afterobjectNo
created_beforeobjectNo
limitobjectNo
metadataOpenAI.MetadataSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
modelstringNo
typeenumThe type of source. Always stored_completions.
Possible values: stored_completions
Yes

OpenAI.FileSearchTool

A tool that searches for relevant content from uploaded files.
NameTypeDescriptionRequiredDefault
filtersobjectNo
max_num_resultsintegerThe maximum number of results to return. This number should be between 1 and 50 inclusive.No
ranking_optionsobjectNo
└─ rankerenumThe ranker to use for the file search.
Possible values: auto, default-2024-11-15
No
└─ score_thresholdnumberThe score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.No
typeenumThe type of the file search tool. Always file_search.
Possible values: file_search
Yes
vector_store_idsarrayThe IDs of the vector stores to search.Yes
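A file_search tool entry can be sketched as below; the vector store ID is a placeholder, and the bounds in the comments come from the table above:

```python
# Hypothetical file_search tool entry; the vector store ID is a placeholder.
file_search_tool = {
    "type": "file_search",
    "vector_store_ids": ["vs_abc123"],
    "max_num_results": 20,              # between 1 and 50 inclusive
    "ranking_options": {
        "ranker": "auto",
        "score_threshold": 0.5,         # between 0 and 1
    },
}
```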

OpenAI.FileSearchToolCallItemParam

The results of a file search tool call. See the file search guide for more information.
NameTypeDescriptionRequiredDefault
queriesarrayThe queries used to search for files.Yes
resultsarrayThe results of the file search tool call.No
typeenum
Possible values: file_search_call
Yes

OpenAI.FileSearchToolCallItemResource

The results of a file search tool call. See the file search guide for more information.
NameTypeDescriptionRequiredDefault
queriesarrayThe queries used to search for files.Yes
resultsarrayThe results of the file search tool call.No
statusenumThe status of the file search tool call. One of in_progress,
searching, completed, incomplete, or failed.
Possible values: in_progress, searching, completed, incomplete, failed
Yes
typeenum
Possible values: file_search_call
Yes

OpenAI.Filters

NameTypeDescriptionRequiredDefault
filtersarrayArray of filters to combine. Items can be ComparisonFilter or CompoundFilter.Yes
keystringThe key to compare against the value.Yes
typeenumType of operation: and or or.
Possible values: and, or
Yes
valuestring or number or booleanThe value to compare against the attribute key; supports string, number, or boolean types.Yes
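The filters array combines comparison filters under an and/or operation; a minimal sketch, assuming hypothetical attribute keys and comparison filter shapes:

```python
# Hypothetical compound filter; keys and comparison entries are illustrative.
compound_filter = {
    "type": "and",                      # or "or"
    "filters": [
        {"type": "eq", "key": "author", "value": "docs-team"},
        {"type": "eq", "key": "reviewed", "value": True},
    ],
}
```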

OpenAI.FineTuneDPOHyperparameters

The hyperparameters used for the DPO fine-tuning job.
NameTypeDescriptionRequiredDefault
batch_sizeenum
Possible values: auto
No
betaenum
Possible values: auto
No
learning_rate_multiplierenum
Possible values: auto
No
n_epochsenum
Possible values: auto
No

OpenAI.FineTuneDPOMethod

Configuration for the DPO fine-tuning method.
NameTypeDescriptionRequiredDefault
hyperparametersOpenAI.FineTuneDPOHyperparametersThe hyperparameters used for the DPO fine-tuning job.No

OpenAI.FineTuneMethod

The method used for fine-tuning.
NameTypeDescriptionRequiredDefault
dpoOpenAI.FineTuneDPOMethodConfiguration for the DPO fine-tuning method.No
reinforcementOpenAI.FineTuneReinforcementMethodConfiguration for the reinforcement fine-tuning method.No
supervisedOpenAI.FineTuneSupervisedMethodConfiguration for the supervised fine-tuning method.No
typeenumThe type of method. Is either supervised, dpo, or reinforcement.
Possible values: supervised, dpo, reinforcement
Yes
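As a sketch of the method discriminator described above, a dpo configuration using the auto hyperparameters listed for DPO might look like:

```python
# Hypothetical dpo method configuration using the hyperparameters listed above.
dpo_method = {
    "type": "dpo",
    "dpo": {
        "hyperparameters": {
            "batch_size": "auto",
            "beta": "auto",
            "learning_rate_multiplier": "auto",
            "n_epochs": "auto",
        }
    },
}
```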

OpenAI.FineTuneReinforcementHyperparameters

The hyperparameters used for the reinforcement fine-tuning job.
NameTypeDescriptionRequiredDefault
batch_sizeenum
Possible values: auto
No
compute_multiplierenum
Possible values: auto
No
eval_intervalenum
Possible values: auto
No
eval_samplesenum
Possible values: auto
No
learning_rate_multiplierenum
Possible values: auto
No
n_epochsenum
Possible values: auto
No
reasoning_effortenumLevel of reasoning effort.
Possible values: default, low, medium, high
No

OpenAI.FineTuneReinforcementMethod

Configuration for the reinforcement fine-tuning method.
NameTypeDescriptionRequiredDefault
graderobjectA StringCheckGrader object that performs a string comparison between input and reference using a specified operation.Yes
└─ calculate_outputstringA formula to calculate the output based on grader results.No
└─ evaluation_metricenumThe evaluation metric to use. One of cosine, fuzzy_match, bleu,
gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5,
or rouge_l.
Possible values: cosine, fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l
No
└─ gradersOpenAI.GraderStringCheck or OpenAI.GraderTextSimilarity or OpenAI.GraderPython or OpenAI.GraderScoreModel or OpenAI.GraderLabelModelA StringCheckGrader object that performs a string comparison between input and reference using a specified operation.No
└─ image_tagstringThe image tag to use for the python script.No
└─ inputarrayThe input text. This may include template strings.No
└─ modelstringThe model to use for the evaluation.No
└─ namestringThe name of the grader.No
└─ operationenumThe string check operation to perform. One of eq, ne, like, or ilike.
Possible values: eq, ne, like, ilike
No
└─ rangearrayThe range of the score. Defaults to [0, 1].No
└─ referencestringThe text being graded against.No
└─ sampling_paramsOpenAI.EvalGraderScoreModelSamplingParamsThe sampling parameters for the model.No
└─ sourcestringThe source code of the python script.No
└─ typeenumThe object type, which is always multi.
Possible values: multi
No
hyperparametersOpenAI.FineTuneReinforcementHyperparametersThe hyperparameters used for the reinforcement fine-tuning job.No

OpenAI.FineTuneSupervisedHyperparameters

The hyperparameters used for the fine-tuning job.
NameTypeDescriptionRequiredDefault
batch_sizeenum
Possible values: auto
No
learning_rate_multiplierenum
Possible values: auto
No
n_epochsenum
Possible values: auto
No

OpenAI.FineTuneSupervisedMethod

Configuration for the supervised fine-tuning method.
NameTypeDescriptionRequiredDefault
hyperparametersOpenAI.FineTuneSupervisedHyperparametersThe hyperparameters used for the fine-tuning job.No

OpenAI.FineTuningIntegration

Discriminator for OpenAI.FineTuningIntegration

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typestring (see valid types below)Yes

OpenAI.FineTuningIntegrationWandb

NameTypeDescriptionRequiredDefault
typeenumThe type of the integration being enabled for the fine-tuning job
Possible values: wandb
Yes
wandbobjectThe settings for your integration with Weights and Biases. This payload specifies the project that
metrics will be sent to. Optionally, you can set an explicit display name for your run, add tags
to your run, and set a default entity (team, username, etc) to be associated with your run.
Yes
└─ entitystringThe entity to use for the run. This allows you to set the team or username of the WandB user that you would
like associated with the run. If not set, the default entity for the registered WandB API key is used.
No
└─ namestringA display name to set for the run. If not set, we will use the Job ID as the name.No
└─ projectstringThe name of the project that the new run will be created under.No
└─ tagsarrayA list of tags to be attached to the newly created run. These tags are passed through directly to WandB. Some
default tags are generated by OpenAI: “openai/finetune”, “openai/{base-model}”, “openai/{ftjob-abcdef}”.
No
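A wandb integration payload sketch; the project, run name, and entity below are placeholders, and the comments restate the behavior documented above:

```python
# Hypothetical wandb integration payload; values are placeholders.
wandb_integration = {
    "type": "wandb",
    "wandb": {
        "project": "my-finetune-project",  # target project for metrics
        "name": "run-001",                 # optional; defaults to the Job ID
        "entity": "my-team",               # optional team/username
        "tags": ["docs", "example"],       # merged with OpenAI-generated tags
    },
}
```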

OpenAI.FineTuningJob

The fine_tuning.job object represents a fine-tuning job that has been created through the API.
NameTypeDescriptionRequiredDefault
created_atintegerThe Unix timestamp (in seconds) for when the fine-tuning job was created.Yes
errorobjectFor fine-tuning jobs that have failed, this will contain more information on the cause of the failure.Yes
└─ codestringA machine-readable error code.No
└─ messagestringA human-readable error message.No
└─ paramstringThe parameter that was invalid, usually training_file or validation_file. This field will be null if the failure was not parameter-specific.No
estimated_finishintegerThe Unix timestamp (in seconds) for when the fine-tuning job is estimated to finish. The value will be null if the fine-tuning job is not running.No
fine_tuned_modelstringThe name of the fine-tuned model that is being created. The value will be null if the fine-tuning job is still running.Yes
finished_atintegerThe Unix timestamp (in seconds) for when the fine-tuning job was finished. The value will be null if the fine-tuning job is still running.Yes
hyperparametersobjectThe hyperparameters used for the fine-tuning job. This value will only be returned when running supervised jobs.Yes
└─ batch_sizeenum
Possible values: auto
No
└─ learning_rate_multiplierenum
Possible values: auto
No
└─ n_epochsenum
Possible values: auto
No
idstringThe object identifier, which can be referenced in the API endpoints.Yes
integrationsarrayA list of integrations to enable for this fine-tuning job.No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
Yes
methodOpenAI.FineTuneMethodThe method used for fine-tuning.No
modelstringThe base model that is being fine-tuned.Yes
objectenumThe object type, which is always “fine_tuning.job”.
Possible values: fine_tuning.job
Yes
organization_idstringThe organization that owns the fine-tuning job.Yes
result_filesarrayThe compiled results file ID(s) for the fine-tuning job. You can retrieve the results with the Files API.Yes
seedintegerThe seed used for the fine-tuning job.Yes
statusenumThe current status of the fine-tuning job, which can be either validating_files, queued, running, succeeded, failed, or cancelled.
Possible values: validating_files, queued, running, succeeded, failed, cancelled
Yes
trained_tokensintegerThe total number of billable tokens processed by this fine-tuning job. The value will be null if the fine-tuning job is still running.Yes
training_filestringThe file ID used for training. You can retrieve the training data with the Files API.Yes
user_provided_suffixstringThe descriptive suffix applied to the job, as specified in the job creation request.No
validation_filestringThe file ID used for validation. You can retrieve the validation results with the Files API.Yes

OpenAI.FineTuningJobCheckpoint

The fine_tuning.job.checkpoint object represents a model checkpoint for a fine-tuning job that is ready to use.
NameTypeDescriptionRequiredDefault
created_atintegerThe Unix timestamp (in seconds) for when the checkpoint was created.Yes
fine_tuned_model_checkpointstringThe name of the fine-tuned checkpoint model that is created.Yes
fine_tuning_job_idstringThe name of the fine-tuning job that this checkpoint was created from.Yes
idstringThe checkpoint identifier, which can be referenced in the API endpoints.Yes
metricsobjectMetrics at the step number during the fine-tuning job.Yes
└─ full_valid_lossnumberNo
└─ full_valid_mean_token_accuracynumberNo
└─ stepnumberNo
└─ train_lossnumberNo
└─ train_mean_token_accuracynumberNo
└─ valid_lossnumberNo
└─ valid_mean_token_accuracynumberNo
objectenumThe object type, which is always “fine_tuning.job.checkpoint”.
Possible values: fine_tuning.job.checkpoint
Yes
step_numberintegerThe step number that the checkpoint was created at.Yes

OpenAI.FineTuningJobEvent

Fine-tuning job event object
NameTypeDescriptionRequiredDefault
created_atintegerThe Unix timestamp (in seconds) for when the fine-tuning job was created.Yes
dataThe data associated with the event.No
idstringThe object identifier.Yes
levelenumThe log level of the event.
Possible values: info, warn, error
Yes
messagestringThe message of the event.Yes
objectenumThe object type, which is always “fine_tuning.job.event”.
Possible values: fine_tuning.job.event
Yes
typeenumThe type of event.
Possible values: message, metrics
No

OpenAI.FunctionObject

NameTypeDescriptionRequiredDefault
descriptionstringA description of what the function does, used by the model to choose when and how to call the function.No
namestringThe name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.Yes
parametersThe parameters the functions accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.

Omitting parameters defines a function with an empty parameter list.
No
strictbooleanWhether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the function calling guide.NoFalse

OpenAI.FunctionTool

Defines a function in your own code the model can choose to call.
NameTypeDescriptionRequiredDefault
descriptionstringA description of the function. Used by the model to determine whether or not to call the function.No
namestringThe name of the function to call.Yes
parametersA JSON schema object describing the parameters of the function.Yes
strictbooleanWhether to enforce strict parameter validation. Default true.Yes
typeenumThe type of the function tool. Always function.
Possible values: function
Yes
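A payload matching the OpenAI.FunctionTool shape can be sketched in Python. This is an illustrative helper, not part of the API surface; the `get_weather` function and its schema are hypothetical, and the name check enforces the documented rule (a-z, A-Z, 0-9, underscores, and dashes, maximum length 64).

```python
import json
import re

def make_function_tool(name: str, description: str, parameters: dict,
                       strict: bool = True) -> dict:
    """Build a function tool entry, enforcing the documented name rules."""
    # Name must be a-z, A-Z, 0-9, underscores, or dashes, max length 64.
    if not re.fullmatch(r"[A-Za-z0-9_-]{1,64}", name):
        raise ValueError(f"invalid function tool name: {name!r}")
    return {
        "type": "function",        # always "function" for this tool type
        "name": name,
        "description": description,
        "parameters": parameters,  # a JSON Schema object
        "strict": strict,
    }

# Hypothetical tool definition:
tool = make_function_tool(
    "get_weather",
    "Look up the current weather for a city.",
    {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
        "additionalProperties": False,
    },
)
print(json.dumps(tool, indent=2))
```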

OpenAI.FunctionToolCallItemParam

A tool call to run a function. See the function calling guide for more information.
NameTypeDescriptionRequiredDefault
argumentsstringA JSON string of the arguments to pass to the function.Yes
call_idstringThe unique ID of the function tool call generated by the model.Yes
namestringThe name of the function to run.Yes
typeenum
Possible values: function_call
Yes

OpenAI.FunctionToolCallItemResource

A tool call to run a function. See the function calling guide for more information.
NameTypeDescriptionRequiredDefault
argumentsstringA JSON string of the arguments to pass to the function.Yes
call_idstringThe unique ID of the function tool call generated by the model.Yes
namestringThe name of the function to run.Yes
statusenumThe status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
Possible values: in_progress, completed, incomplete
Yes
typeenum
Possible values: function_call
Yes

OpenAI.FunctionToolCallOutputItemParam

The output of a function tool call.
NameTypeDescriptionRequiredDefault
call_idstringThe unique ID of the function tool call generated by the model.Yes
outputstringA JSON string of the output of the function tool call.Yes
typeenum
Possible values: function_call_output
Yes

OpenAI.FunctionToolCallOutputItemResource

The output of a function tool call.
NameTypeDescriptionRequiredDefault
call_idstringThe unique ID of the function tool call generated by the model.Yes
outputstringA JSON string of the output of the function tool call.Yes
statusenumThe status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
Possible values: in_progress, completed, incomplete
Yes
typeenum
Possible values: function_call_output
Yes
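The function-call round trip implied by the four item types above can be sketched as follows: the model emits a `function_call` item whose `arguments` is a JSON string, your code runs the matching local function, and you send back a `function_call_output` item echoing the same `call_id`. The `add` function and `call_123` ID below are hypothetical.

```python
import json

def handle_function_call(item: dict, functions: dict) -> dict:
    """Run the local function named in a function_call item and build the
    function_call_output item to send back to the model."""
    assert item["type"] == "function_call"
    args = json.loads(item["arguments"])       # arguments arrive as a JSON string
    result = functions[item["name"]](**args)   # dispatch to local code
    return {
        "type": "function_call_output",
        "call_id": item["call_id"],            # must echo the model's call_id
        "output": json.dumps(result),          # output is also a JSON string
    }

# Hypothetical model-generated call and local implementation:
call = {"type": "function_call", "call_id": "call_123",
        "name": "add", "arguments": '{"a": 2, "b": 3}'}
out = handle_function_call(call, {"add": lambda a, b: {"sum": a + b}})
```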

OpenAI.GraderLabelModel

A LabelModelGrader object which uses a model to assign labels to each item in the evaluation.
NameTypeDescriptionRequiredDefault
inputarrayYes
labelsarrayThe labels to assign to each item in the evaluation.Yes
modelstringThe model to use for the evaluation. Must support structured outputs.Yes
namestringThe name of the grader.Yes
passing_labelsarrayThe labels that indicate a passing result. Must be a subset of labels.Yes
typeenumThe object type, which is always label_model.
Possible values: label_model
Yes

OpenAI.GraderMulti

A MultiGrader object combines the output of multiple graders to produce a single score.
NameTypeDescriptionRequiredDefault
calculate_outputstringA formula to calculate the output based on grader results.Yes
gradersobjectA map of grader names to grader configurations; each value may be any supported grader type (string check, text similarity, label model, python, or score model).Yes
└─ evaluation_metricenumThe evaluation metric to use. One of cosine, fuzzy_match, bleu,
gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5,
or rouge_l.
Possible values: cosine, fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l
No
└─ image_tagstringThe image tag to use for the python script.No
└─ inputarrayNo
└─ labelsarrayThe labels to assign to each item in the evaluation.No
└─ modelstringThe model to use for the evaluation. Must support structured outputs.No
└─ namestringThe name of the grader.No
└─ operationenumThe string check operation to perform. One of eq, ne, like, or ilike.
Possible values: eq, ne, like, ilike
No
└─ passing_labelsarrayThe labels that indicate a passing result. Must be a subset of labels.No
└─ rangearrayThe range of the score. Defaults to [0, 1].No
└─ referencestringThe text being graded against.No
└─ sampling_paramsOpenAI.EvalGraderScoreModelSamplingParamsThe sampling parameters for the model.No
└─ sourcestringThe source code of the python script.No
└─ typeenumThe object type, which is always label_model.
Possible values: label_model
No
namestringThe name of the grader.Yes
typeenumThe object type, which is always multi.
Possible values: multi
Yes

OpenAI.GraderPython

A PythonGrader object that runs a python script on the input.
NameTypeDescriptionRequiredDefault
image_tagstringThe image tag to use for the python script.No
namestringThe name of the grader.Yes
sourcestringThe source code of the python script.Yes
typeenumThe object type, which is always python.
Possible values: python
Yes

OpenAI.GraderScoreModel

A ScoreModelGrader object that uses a model to assign a score to the input.
NameTypeDescriptionRequiredDefault
inputarrayThe input text. This may include template strings.Yes
modelstringThe model to use for the evaluation.Yes
namestringThe name of the grader.Yes
rangearrayThe range of the score. Defaults to [0, 1].No
sampling_paramsobjectNo
└─ max_completions_tokensOpenAI.integerNo
└─ reasoning_effortOpenAI.ReasoningEffortConstrains effort on reasoning for reasoning models.

Currently supported values are none, minimal, low, medium, and high.

Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.

All models before gpt-5.1 default to medium reasoning effort, and do not support none.

The gpt-5-pro model defaults to (and only supports) high reasoning effort.
No
└─ seedOpenAI.integerNo
└─ temperatureOpenAI.numericNo
└─ top_pOpenAI.numericNo
typeenumThe object type, which is always score_model.
Possible values: score_model
Yes

OpenAI.GraderStringCheck

A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.
NameTypeDescriptionRequiredDefault
inputstringThe input text. This may include template strings.Yes
namestringThe name of the grader.Yes
operationenumThe string check operation to perform. One of eq, ne, like, or ilike.
Possible values: eq, ne, like, ilike
Yes
referencestringThe reference text. This may include template strings.Yes
typeenumThe object type, which is always string_check.
Possible values: string_check
Yes
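The four `string_check` operations can be evaluated locally with a small helper. This is a sketch of the semantics, assuming `like` means a case-sensitive substring match and `ilike` its case-insensitive variant; consult the service documentation for the authoritative behavior.

```python
def string_check(operation: str, input_text: str, reference: str) -> bool:
    """Evaluate a string_check grader operation locally.

    NOTE: 'like'/'ilike' are assumed here to mean substring matching
    (case-sensitive / case-insensitive); this is an illustrative sketch."""
    if operation == "eq":
        return input_text == reference
    if operation == "ne":
        return input_text != reference
    if operation == "like":
        return reference in input_text
    if operation == "ilike":
        return reference.lower() in input_text.lower()
    raise ValueError(f"unknown operation: {operation}")
```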

OpenAI.GraderTextSimilarity

A TextSimilarityGrader object which grades text based on similarity metrics.
NameTypeDescriptionRequiredDefault
evaluation_metricenumThe evaluation metric to use. One of cosine, fuzzy_match, bleu,
gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5,
or rouge_l.
Possible values: cosine, fuzzy_match, bleu, gleu, meteor, rouge_1, rouge_2, rouge_3, rouge_4, rouge_5, rouge_l
Yes
inputstringThe text being graded.Yes
namestringThe name of the grader.Yes
referencestringThe text being graded against.Yes
typeenumThe type of grader.
Possible values: text_similarity
Yes

OpenAI.ImageGenTool

A tool that generates images using a model like gpt-image-1.
NameTypeDescriptionRequiredDefault
backgroundenumBackground type for the generated image. One of transparent,
opaque, or auto. Default: auto.
Possible values: transparent, opaque, auto
No
input_image_maskobjectOptional mask for inpainting. Contains image_url
(string, optional) and file_id (string, optional).
No
└─ file_idstringFile ID for the mask image.No
└─ image_urlstringBase64-encoded mask image.No
modelenumThe image generation model to use. Default: gpt-image-1.
Possible values: gpt-image-1
No
moderationenumModeration level for the generated image. Default: auto.
Possible values: auto, low
No
output_compressionintegerCompression level for the output image. Default: 100.No100
output_formatenumThe output format of the generated image. One of png, webp, or
jpeg. Default: png.
Possible values: png, webp, jpeg
No
partial_imagesintegerNumber of partial images to generate in streaming mode, from 0 (default value) to 3.No0
qualityenumThe quality of the generated image. One of low, medium, high,
or auto. Default: auto.
Possible values: low, medium, high, auto
No
sizeenumThe size of the generated image. One of 1024x1024, 1024x1536,
1536x1024, or auto. Default: auto.
Possible values: 1024x1024, 1024x1536, 1536x1024, auto
No
typeenumThe type of the image generation tool. Always image_generation.
Possible values: image_generation
Yes

OpenAI.ImageGenToolCallItemParam

An image generation request made by the model.
NameTypeDescriptionRequiredDefault
resultstringThe generated image encoded in base64.Yes
typeenum
Possible values: image_generation_call
Yes

OpenAI.ImageGenToolCallItemResource

An image generation request made by the model.
NameTypeDescriptionRequiredDefault
resultstringThe generated image encoded in base64.Yes
statusenum
Possible values: in_progress, completed, generating, failed
Yes
typeenum
Possible values: image_generation_call
Yes

OpenAI.Includable

Specify additional output data to include in the model response. Currently supported values are:
  • code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.
  • computer_call_output.output.image_url: Include image urls from the computer call output.
  • file_search_call.results: Include the search results of the file search tool call.
  • message.input_image.image_url: Include image urls from the input message.
  • message.output_text.logprobs: Include logprobs with assistant messages.
  • reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).
PropertyValue
DescriptionSpecify additional output data to include in the model response. Currently
supported values are:
- code_interpreter_call.outputs: Includes the outputs of python code execution
in code interpreter tool call items.
- computer_call_output.output.image_url: Include image urls from the computer call output.
- file_search_call.results: Include the search results of
the file search tool call.
- message.input_image.image_url: Include image urls from the input message.
- message.output_text.logprobs: Include logprobs with assistant messages.
- reasoning.encrypted_content: Includes an encrypted version of reasoning
tokens in reasoning item outputs. This enables reasoning items to be used in
multi-turn conversations when using the Responses API statelessly (like
when the store parameter is set to false, or when an organization is
enrolled in the zero data retention program).
Typestring
Valuescode_interpreter_call.outputs
computer_call_output.output.image_url
file_search_call.results
message.input_image.image_url
message.output_text.logprobs
reasoning.encrypted_content
web_search_call.results
web_search_call.action.sources
memory_search_call.results
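The `include` values above are passed as an array on the request body. A minimal request sketch (the model name and input text are hypothetical):

```python
import json

# Hypothetical Responses API request body asking for additional output data.
body = {
    "model": "gpt-4.1",  # hypothetical model deployment name
    "input": "Summarize the attached file.",
    "include": [
        "file_search_call.results",        # include file search results
        "message.output_text.logprobs",    # include logprobs on assistant text
        "reasoning.encrypted_content",     # needed for stateless multi-turn use
    ],
}
payload = json.dumps(body)
```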

OpenAI.ItemContent

Discriminator for OpenAI.ItemContent

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.ItemContentTypeMulti-modal input and output contents.Yes

OpenAI.ItemContentInputAudio

An audio input to the model.
NameTypeDescriptionRequiredDefault
datastringBase64-encoded audio data.Yes
formatenumThe format of the audio data. Currently supported formats are mp3 and
wav.
Possible values: mp3, wav
Yes
typeenumThe type of the input item. Always input_audio.
Possible values: input_audio
Yes

OpenAI.ItemContentInputFile

A file input to the model.
NameTypeDescriptionRequiredDefault
file_datastringThe content of the file to be sent to the model.No
file_idstringThe ID of the file to be sent to the model.No
filenamestringThe name of the file to be sent to the model.No
typeenumThe type of the input item. Always input_file.
Possible values: input_file
Yes

OpenAI.ItemContentInputImage

An image input to the model. Learn about image inputs.
NameTypeDescriptionRequiredDefault
detailenumThe detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
Possible values: low, high, auto
No
file_idstringThe ID of the file to be sent to the model.No
image_urlstringThe URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL.No
typeenumThe type of the input item. Always input_image.
Possible values: input_image
Yes
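Since `image_url` accepts either a fully qualified URL or a base64 data URL, an `input_image` content part can be built from raw bytes as sketched below; the helper name and the placeholder bytes are illustrative.

```python
import base64

def input_image_from_bytes(image_bytes: bytes, mime: str = "image/png",
                           detail: str = "auto") -> dict:
    """Build an input_image content part from raw bytes as a base64 data URL."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "type": "input_image",
        "image_url": f"data:{mime};base64,{encoded}",
        "detail": detail,  # low, high, or auto (the default)
    }

# Placeholder bytes stand in for real PNG data:
part = input_image_from_bytes(b"\x89PNG...", "image/png", "low")
```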

OpenAI.ItemContentInputText

A text input to the model.
NameTypeDescriptionRequiredDefault
textstringThe text input to the model.Yes
typeenumThe type of the input item. Always input_text.
Possible values: input_text
Yes

OpenAI.ItemContentOutputAudio

An audio output from the model.
NameTypeDescriptionRequiredDefault
datastringBase64-encoded audio data from the model.Yes
transcriptstringThe transcript of the audio data from the model.Yes
typeenumThe type of the output audio. Always output_audio.
Possible values: output_audio
Yes

OpenAI.ItemContentOutputText

A text output from the model.
NameTypeDescriptionRequiredDefault
annotationsarrayThe annotations of the text output.Yes
logprobsarrayNo
textstringThe text output from the model.Yes
typeenumThe type of the output text. Always output_text.
Possible values: output_text
Yes

OpenAI.ItemContentRefusal

A refusal from the model.
NameTypeDescriptionRequiredDefault
refusalstringThe refusal explanation from the model.Yes
typeenumThe type of the refusal. Always refusal.
Possible values: refusal
Yes

OpenAI.ItemContentType

Multi-modal input and output contents.
PropertyValue
DescriptionMulti-modal input and output contents.
Typestring
Valuesinput_text
input_audio
input_image
input_file
output_text
output_audio
refusal

OpenAI.ItemParam

Content item used to generate a response.

Discriminator for OpenAI.ItemParam

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.ItemTypeYes

OpenAI.ItemReferenceItemParam

An internal identifier for an item to reference.
NameTypeDescriptionRequiredDefault
idstringThe service-originated ID of the previously generated response item being referenced.Yes
typeenum
Possible values: item_reference
Yes

OpenAI.ItemResource

Content item used to generate a response.

Discriminator for OpenAI.ItemResource

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
created_byobjectNo
└─ agentAgentIdThe agent that created the item.No
└─ response_idstringThe response on which the item is created.No
idstringYes
typeOpenAI.ItemTypeYes

OpenAI.ItemType

PropertyValue
Typestring
Valuesmessage
file_search_call
function_call
function_call_output
computer_call
computer_call_output
web_search_call
reasoning
item_reference
image_generation_call
code_interpreter_call
local_shell_call
local_shell_call_output
mcp_list_tools
mcp_approval_request
mcp_approval_response
mcp_call
structured_outputs
workflow_action
memory_search_call
oauth_consent_request

OpenAI.ListFineTuningJobCheckpointsResponse

NameTypeDescriptionRequiredDefault
dataarrayYes
first_idstringNo
has_morebooleanYes
last_idstringNo
objectenum
Possible values: list
Yes

OpenAI.ListFineTuningJobEventsResponse

NameTypeDescriptionRequiredDefault
dataarrayYes
has_morebooleanYes
objectenum
Possible values: list
Yes

OpenAI.ListPaginatedFineTuningJobsResponse

NameTypeDescriptionRequiredDefault
dataarrayYes
has_morebooleanYes
objectenum
Possible values: list
Yes

OpenAI.LocalShellExecAction

Execute a shell command on the server.
NameTypeDescriptionRequiredDefault
commandarrayThe command to run.Yes
envobjectEnvironment variables to set for the command.Yes
timeout_msintegerOptional timeout in milliseconds for the command.No
typeenumThe type of the local shell action. Always exec.
Possible values: exec
Yes
userstringOptional user to run the command as.No
working_directorystringOptional working directory to run the command in.No
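A harness that receives an exec action might run it roughly as follows. This is a simplified sketch: it ignores the `user` field, inherits the parent environment when `env` is empty, and returns raw stdout (a real harness would decide its own encoding for the `output` JSON string of the corresponding output item).

```python
import subprocess

def run_exec_action(action: dict) -> dict:
    """Execute a LocalShellExecAction locally and package the result
    as a local_shell_call_output-shaped item (illustrative sketch)."""
    assert action["type"] == "exec"
    timeout_ms = action.get("timeout_ms")
    completed = subprocess.run(
        action["command"],                  # e.g. ["echo", "hello"]
        env=action.get("env") or None,      # empty env -> inherit parent env
        cwd=action.get("working_directory"),
        capture_output=True,
        text=True,
        timeout=timeout_ms / 1000 if timeout_ms else None,
    )
    return {"type": "local_shell_call_output",
            "output": completed.stdout}

result = run_exec_action({"type": "exec", "command": ["echo", "hello"], "env": {}})
```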

OpenAI.LocalShellTool

A tool that allows the model to execute shell commands in a local environment.
NameTypeDescriptionRequiredDefault
typeenumThe type of the local shell tool. Always local_shell.
Possible values: local_shell
Yes

OpenAI.LocalShellToolCallItemParam

A tool call to run a command on the local shell.
NameTypeDescriptionRequiredDefault
actionOpenAI.LocalShellExecActionExecute a shell command on the server.Yes
call_idstringThe unique ID of the local shell tool call generated by the model.Yes
typeenum
Possible values: local_shell_call
Yes

OpenAI.LocalShellToolCallItemResource

A tool call to run a command on the local shell.
NameTypeDescriptionRequiredDefault
actionOpenAI.LocalShellExecActionExecute a shell command on the server.Yes
call_idstringThe unique ID of the local shell tool call generated by the model.Yes
statusenum
Possible values: in_progress, completed, incomplete
Yes
typeenum
Possible values: local_shell_call
Yes

OpenAI.LocalShellToolCallOutputItemParam

The output of a local shell tool call.
NameTypeDescriptionRequiredDefault
outputstringA JSON string of the output of the local shell tool call.Yes
typeenum
Possible values: local_shell_call_output
Yes

OpenAI.LocalShellToolCallOutputItemResource

The output of a local shell tool call.
NameTypeDescriptionRequiredDefault
outputstringA JSON string of the output of the local shell tool call.Yes
statusenum
Possible values: in_progress, completed, incomplete
Yes
typeenum
Possible values: local_shell_call_output
Yes

OpenAI.Location

Discriminator for OpenAI.Location

This component uses the property type to discriminate between different types:
Type ValueSchema
approximateOpenAI.ApproximateLocation
NameTypeDescriptionRequiredDefault
typeOpenAI.LocationTypeYes

OpenAI.LocationType

PropertyValue
Typestring
Valuesapproximate

OpenAI.LogProb

The log probability of a token.
NameTypeDescriptionRequiredDefault
bytesarrayYes
logprobnumberYes
tokenstringYes
top_logprobsarrayYes

OpenAI.MCPApprovalRequestItemParam

A request for human approval of a tool invocation.
NameTypeDescriptionRequiredDefault
argumentsstringA JSON string of arguments for the tool.Yes
namestringThe name of the tool to run.Yes
server_labelstringThe label of the MCP server making the request.Yes
typeenum
Possible values: mcp_approval_request
Yes

OpenAI.MCPApprovalRequestItemResource

A request for human approval of a tool invocation.
NameTypeDescriptionRequiredDefault
argumentsstringA JSON string of arguments for the tool.Yes
namestringThe name of the tool to run.Yes
server_labelstringThe label of the MCP server making the request.Yes
typeenum
Possible values: mcp_approval_request
Yes

OpenAI.MCPApprovalResponseItemParam

A response to an MCP approval request.
NameTypeDescriptionRequiredDefault
approval_request_idstringThe ID of the approval request being answered.Yes
approvebooleanWhether the request was approved.Yes
reasonstringOptional reason for the decision.No
typeenum
Possible values: mcp_approval_response
Yes

OpenAI.MCPApprovalResponseItemResource

A response to an MCP approval request.
NameTypeDescriptionRequiredDefault
approval_request_idstringThe ID of the approval request being answered.Yes
approvebooleanWhether the request was approved.Yes
reasonstringOptional reason for the decision.No
typeenum
Possible values: mcp_approval_response
Yes

OpenAI.MCPCallItemParam

An invocation of a tool on an MCP server.
NameTypeDescriptionRequiredDefault
argumentsstringA JSON string of the arguments passed to the tool.Yes
errorstringThe error from the tool call, if any.No
namestringThe name of the tool that was run.Yes
outputstringThe output from the tool call.No
server_labelstringThe label of the MCP server running the tool.Yes
typeenum
Possible values: mcp_call
Yes

OpenAI.MCPCallItemResource

An invocation of a tool on an MCP server.
NameTypeDescriptionRequiredDefault
argumentsstringA JSON string of the arguments passed to the tool.Yes
errorstringThe error from the tool call, if any.No
namestringThe name of the tool that was run.Yes
outputstringThe output from the tool call.No
server_labelstringThe label of the MCP server running the tool.Yes
typeenum
Possible values: mcp_call
Yes

OpenAI.MCPListToolsItemParam

A list of tools available on an MCP server.
NameTypeDescriptionRequiredDefault
errorstringError message if the server could not list tools.No
server_labelstringThe label of the MCP server.Yes
toolsarrayThe tools available on the server.Yes
typeenum
Possible values: mcp_list_tools
Yes

OpenAI.MCPListToolsItemResource

A list of tools available on an MCP server.
NameTypeDescriptionRequiredDefault
errorstringError message if the server could not list tools.No
server_labelstringThe label of the MCP server.Yes
toolsarrayThe tools available on the server.Yes
typeenum
Possible values: mcp_list_tools
Yes

OpenAI.MCPListToolsTool

A tool available on an MCP server.
NameTypeDescriptionRequiredDefault
annotationsAdditional annotations about the tool.No
descriptionstringThe description of the tool.No
input_schemaThe JSON schema describing the tool’s input.Yes
namestringThe name of the tool.Yes

OpenAI.MCPTool

Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
NameTypeDescriptionRequiredDefault
allowed_toolsobjectNo
└─ tool_namesarrayList of allowed tool names.No
headersobjectOptional HTTP headers to send to the MCP server. Use for authentication
or other purposes.
No
project_connection_idstringThe connection ID in the project for the MCP server. The connection stores authentication and other connection details needed to connect to the MCP server.No
require_approvalobject (see valid models below)Specify which of the MCP server’s tools require approval.No
server_labelstringA label for this MCP server, used to identify it in tool calls.Yes
server_urlstringThe URL for the MCP server.Yes
typeenumThe type of the MCP tool. Always mcp.
Possible values: mcp
Yes
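An MCP tool entry combining these properties can be sketched as a dict. The server label, URL, tool name, and token below are hypothetical, and `require_approval` is shown with a simple string form (it also accepts an object per the "see valid models" note above).

```python
# Hypothetical MCP tool entry for a request body.
mcp_tool = {
    "type": "mcp",
    "server_label": "docs",                       # identifies the server in tool calls
    "server_url": "https://example.com/mcp",      # hypothetical MCP endpoint
    "allowed_tools": {"tool_names": ["search"]},  # restrict which tools may be called
    "require_approval": "never",                  # or a per-tool approval object
    "headers": {"Authorization": "Bearer <token>"},  # optional auth headers
}
```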

OpenAI.Metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. Type: object
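The metadata constraints stated above (at most 16 pairs, string keys up to 64 characters, string values up to 512 characters) can be checked client-side before sending a request; this validator is illustrative, not part of the API.

```python
def validate_metadata(metadata: dict) -> None:
    """Check the documented OpenAI.Metadata constraints locally."""
    if len(metadata) > 16:
        raise ValueError("metadata may hold at most 16 key-value pairs")
    for key, value in metadata.items():
        if not isinstance(key, str) or len(key) > 64:
            raise ValueError(f"invalid metadata key: {key!r}")
        if not isinstance(value, str) or len(value) > 512:
            raise ValueError(f"invalid metadata value for key {key!r}")

validate_metadata({"team": "search", "ticket": "AB-123"})  # passes silently
```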

OpenAI.Prompt

Reference to a prompt template and its variables. Learn more.
NameTypeDescriptionRequiredDefault
idstringThe unique identifier of the prompt template to use.Yes
variablesobjectOptional map of values to substitute in for variables in your
prompt. The substitution values can either be strings, or other
Response input types like images or files.
No
versionstringOptional version of the prompt template.No

OpenAI.RankingOptions

NameTypeDescriptionRequiredDefault
rankerenumThe ranker to use for the file search.
Possible values: auto, default-2024-11-15
No
score_thresholdnumberThe score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer results.No

OpenAI.Reasoning

o-series models only. Configuration options for reasoning models.
NameTypeDescriptionRequiredDefault
effortobjectConstrains effort on reasoning for reasoning models.

Currently supported values are none, minimal, low, medium, and high.

Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.

All models before gpt-5.1 default to medium reasoning effort, and do not support none.

The gpt-5-pro model defaults to (and only supports) high reasoning effort.
No
generate_summaryenumDeprecated: use summary instead. A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model’s reasoning process. One of auto, concise, or detailed.
Possible values: auto, concise, detailed
No
summaryenumA summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model’s reasoning process.
One of auto, concise, or detailed.
Possible values: auto, concise, detailed
No

OpenAI.ReasoningEffort

Constrains effort on reasoning for reasoning models. Currently supported values are none, minimal, low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response. gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1. All models before gpt-5.1 default to medium reasoning effort, and do not support none. The gpt-5-pro model defaults to (and only supports) high reasoning effort.
PropertyValue
DescriptionConstrains effort on reasoning for reasoning models.

Currently supported values are none, minimal, low, medium, and high.

Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.

All models before gpt-5.1 default to medium reasoning effort, and do not support none.

The gpt-5-pro model defaults to (and only supports) high reasoning effort.
Typestring
Valuesnone
minimal
low
medium
high
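The per-model defaults described above can be summarized in a small helper; the prefix matching is a naive illustration, not how the service resolves model names.

```python
def default_reasoning_effort(model: str) -> str:
    """Default reasoning effort per the documented model behavior
    (illustrative; model names are matched naively by prefix)."""
    if model.startswith("gpt-5-pro"):
        return "high"    # gpt-5-pro defaults to (and only supports) high
    if model.startswith("gpt-5.1"):
        return "none"    # gpt-5.1 defaults to none (no reasoning performed)
    return "medium"      # models before gpt-5.1 default to medium
```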

OpenAI.ReasoningItemParam

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing conversation state.
NameTypeDescriptionRequiredDefault
encrypted_contentstringThe encrypted content of the reasoning item - populated when a response is
generated with reasoning.encrypted_content in the include parameter.
No
summaryarrayReasoning text contents.Yes
typeenum
Possible values: reasoning
Yes

OpenAI.ReasoningItemResource

A description of the chain of thought used by a reasoning model while generating a response. Be sure to include these items in your input to the Responses API for subsequent turns of a conversation if you are manually managing conversation state.
NameTypeDescriptionRequiredDefault
encrypted_contentstringThe encrypted content of the reasoning item - populated when a response is
generated with reasoning.encrypted_content in the include parameter.
No
summaryarrayReasoning text contents.Yes
typeenum
Possible values: reasoning
Yes

OpenAI.ReasoningItemSummaryPart

Discriminator for OpenAI.ReasoningItemSummaryPart

This component uses the property type to discriminate between different types:
Type ValueSchema
summary_textOpenAI.ReasoningItemSummaryTextPart
NameTypeDescriptionRequiredDefault
typeOpenAI.ReasoningItemSummaryPartTypeYes

OpenAI.ReasoningItemSummaryPartType

PropertyValue
Typestring
Valuessummary_text

OpenAI.ReasoningItemSummaryTextPart

NameTypeDescriptionRequiredDefault
textstringYes
typeenum
Possible values: summary_text
Yes

OpenAI.Response

NameTypeDescriptionRequiredDefault
agentobjectNo
└─ namestringThe name of the agent.No
└─ typeenum
Possible values: agent_id
No
└─ versionstringThe version identifier of the agent.No
backgroundbooleanWhether to run the model response in the background.
Learn more about background responses.
NoFalse
conversationobjectYes
└─ idstringNo
created_atintegerUnix timestamp (in seconds) of when this Response was created.Yes
errorobjectAn error object returned when the model fails to generate a Response.Yes
└─ codeOpenAI.ResponseErrorCodeThe error code for the response.No
└─ messagestringA human-readable description of the error.No
idstringUnique identifier for this Response.Yes
incomplete_detailsobjectDetails about why the response is incomplete.Yes
└─ reasonenumThe reason why the response is incomplete.
Possible values: max_output_tokens, content_filter
No
instructionsstring or arrayYes
max_output_tokensintegerAn upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.No
max_tool_callsintegerThe maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
Yes
modelstringThe model deployment to use for the creation of this response.No
objectenumThe object type of this resource - always set to response.
Possible values: response
Yes
outputarrayAn array of content items generated by the model.

- The length and order of items in the output array is dependent
on the model’s response.
- Rather than accessing the first item in the output array and
assuming it’s an assistant message with the content generated by
the model, you might consider using the output_text property where
supported in SDKs.
Yes
output_textstringSDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
No
parallel_tool_callsbooleanWhether to allow the model to run tool calls in parallel.YesTrue
previous_response_idstringThe unique ID of the previous response to the model. Use this to
create multi-turn conversations. Learn more about
managing conversation state.
No
promptobjectReference to a prompt template and its variables.
Learn more.
No
└─ idstringThe unique identifier of the prompt template to use.No
└─ variablesOpenAI.ResponsePromptVariablesOptional map of values to substitute in for variables in your
prompt. The substitution values can either be strings, or other
Response input types like images or files.
No
└─ versionstringOptional version of the prompt template.No
reasoningobjecto-series models only

Configuration options for reasoning models.
No
└─ effortOpenAI.ReasoningEffortConstrains effort on reasoning for reasoning models.

Currently supported values are none, minimal, low, medium, and high.

Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.

All models before gpt-5.1 default to medium reasoning effort, and do not support none.

The gpt-5-pro model defaults to (and only supports) high reasoning effort.
No
└─ generate_summaryenumDeprecated: use summary instead. A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model’s reasoning process. One of auto, concise, or detailed.
Possible values: auto, concise, detailed
No
└─ summaryenumA summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model’s reasoning process.
One of auto, concise, or detailed.
Possible values: auto, concise, detailed
No
service_tierobjectSpecifies the processing type used for serving the request.
* If set to ‘auto’, then the request will be processed with the service tier
configured in the Project settings. Unless otherwise configured, the Project will use ‘default’.
* If set to ‘default’, then the request will be processed with the standard
pricing and performance for the selected model.
* If set to ‘flex’ or ‘priority’, then the request will be processed with the
corresponding service tier. Contact sales to learn more about Priority processing.
* When not set, the default behavior is ‘auto’.

When the service_tier parameter is set, the response body will include the service_tier
value based on the processing mode actually used to serve the request. This response value
may be different from the value set in the parameter.
No
statusenumThe status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Possible values: completed, failed, in_progress, cancelled, queued, incomplete
No
structured_inputsobjectThe structured inputs to the response that can participate in prompt template substitution or tool argument bindings.No
temperaturenumberWhat sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
Yes
textobjectConfiguration options for a text response from the model. Can be plain
text or structured JSON data. See Text inputs and outputs
and Structured Outputs
No
└─ formatOpenAI.ResponseTextFormatConfigurationNo
tool_choiceobjectControls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or
more tools.

required means the model must call one or more tools.
No
└─ typeOpenAI.ToolChoiceObjectTypeIndicates that the model should use a built-in tool to generate a response.
Learn more about built-in tools.
No
toolsarrayAn array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

* Built-in tools: Tools that are provided by OpenAI that extend the
model’s capabilities, like web search
or file search. Learn more about
built-in tools.
* Function calls (custom tools): Functions that are defined by you,
enabling the model to call your own code. Learn more about
function calling.
No
top_logprobsintegerAn integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.No
top_pnumberAn alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or temperature but not both.
Yes
truncationenumThe truncation strategy to use for the model response.
- auto: If the context of this response and previous ones exceeds
the model’s context window size, the model will truncate the
response to fit the context window by dropping input items in the
middle of the conversation.
- disabled (default): If a model response will exceed the context window
size for a model, the request will fail with a 400 error.
Possible values: auto, disabled
No
usageOpenAI.ResponseUsageRepresents token usage details including input tokens, output tokens,
a breakdown of output tokens, and the total tokens used.
No
userstringLearn more about safety best practices.Yes
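The sampling, reasoning, and truncation parameters in the table above combine into a single request body. A minimal sketch of such a payload assembled in Python (the deployment name, input text, and chosen values are illustrative, not prescriptive):

```python
# Sketch: a request body built from fields documented above.
# The model deployment name and input text are placeholders.
payload = {
    "model": "gpt-5.1",                      # your model deployment name
    "input": "Summarize the attached report.",  # input field assumed from the full schema
    "reasoning": {"effort": "low"},          # supported values vary by model (see table)
    "temperature": 0.2,                      # alter this OR top_p, not both
    "parallel_tool_calls": True,
    "truncation": "auto",                    # drop middle items instead of failing with 400
}
```

Send the payload as JSON to the responses endpoint with the bearer token described in the Request Header section.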

OpenAI.ResponseCodeInterpreterCallCodeDeltaEvent

Emitted when a partial code snippet is streamed by the code interpreter.
NameTypeDescriptionRequiredDefault
deltastringThe partial code snippet being streamed by the code interpreter.Yes
item_idstringThe unique identifier of the code interpreter tool call item.Yes
output_indexintegerThe index of the output item in the response for which the code is being streamed.Yes
typeenumThe type of the event. Always response.code_interpreter_call_code.delta.
Possible values: response.code_interpreter_call_code.delta
Yes

OpenAI.ResponseCodeInterpreterCallCodeDoneEvent

Emitted when the code snippet is finalized by the code interpreter.
NameTypeDescriptionRequiredDefault
codestringThe final code snippet output by the code interpreter.Yes
item_idstringThe unique identifier of the code interpreter tool call item.Yes
output_indexintegerThe index of the output item in the response for which the code is finalized.Yes
typeenumThe type of the event. Always response.code_interpreter_call_code.done.
Possible values: response.code_interpreter_call_code.done
Yes

OpenAI.ResponseCodeInterpreterCallCompletedEvent

Emitted when the code interpreter call is completed.
NameTypeDescriptionRequiredDefault
item_idstringThe unique identifier of the code interpreter tool call item.Yes
output_indexintegerThe index of the output item in the response for which the code interpreter call is completed.Yes
typeenumThe type of the event. Always response.code_interpreter_call.completed.
Possible values: response.code_interpreter_call.completed
Yes

OpenAI.ResponseCodeInterpreterCallInProgressEvent

Emitted when a code interpreter call is in progress.
NameTypeDescriptionRequiredDefault
item_idstringThe unique identifier of the code interpreter tool call item.Yes
output_indexintegerThe index of the output item in the response for which the code interpreter call is in progress.Yes
typeenumThe type of the event. Always response.code_interpreter_call.in_progress.
Possible values: response.code_interpreter_call.in_progress
Yes

OpenAI.ResponseCodeInterpreterCallInterpretingEvent

Emitted when the code interpreter is actively interpreting the code snippet.
NameTypeDescriptionRequiredDefault
item_idstringThe unique identifier of the code interpreter tool call item.Yes
output_indexintegerThe index of the output item in the response for which the code interpreter is interpreting code.Yes
typeenumThe type of the event. Always response.code_interpreter_call.interpreting.
Possible values: response.code_interpreter_call.interpreting
Yes
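The code interpreter events above stream a code snippet incrementally: `…call_code.delta` events carry partial text and `…call_code.done` carries the final snippet. A minimal sketch of reassembling them, where `events` stands in for an already-parsed event stream and the payload shapes follow the tables above:

```python
# Sketch: reconstruct streamed code-interpreter code from delta/done events.
def collect_code(events):
    buffers = {}   # item_id -> list of partial snippets
    final = {}     # item_id -> finalized code from the done event
    for ev in events:
        if ev["type"] == "response.code_interpreter_call_code.delta":
            buffers.setdefault(ev["item_id"], []).append(ev["delta"])
        elif ev["type"] == "response.code_interpreter_call_code.done":
            final[ev["item_id"]] = ev["code"]
    return final

# Illustrative event sequence with placeholder IDs.
events = [
    {"type": "response.code_interpreter_call_code.delta", "item_id": "ci_1",
     "output_index": 0, "delta": "print("},
    {"type": "response.code_interpreter_call_code.delta", "item_id": "ci_1",
     "output_index": 0, "delta": "1+1)"},
    {"type": "response.code_interpreter_call_code.done", "item_id": "ci_1",
     "output_index": 0, "code": "print(1+1)"},
]
```

The `done` event carries the complete snippet, so the accumulated deltas serve display purposes only.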

OpenAI.ResponseCompletedEvent

Emitted when the model response is complete.
NameTypeDescriptionRequiredDefault
responseobjectYes
└─ agentAgentIdThe agent used for this responseNo
└─ backgroundbooleanWhether to run the model response in the background.
Learn more about background responses.
NoFalse
└─ conversationobjectNo
└─ idstringNo
└─ created_atintegerUnix timestamp (in seconds) of when this Response was created.No
└─ errorOpenAI.ResponseErrorAn error object returned when the model fails to generate a Response.No
└─ idstringUnique identifier for this Response.No
└─ incomplete_detailsobjectDetails about why the response is incomplete.No
└─ reasonenumThe reason why the response is incomplete.
Possible values: max_output_tokens, content_filter
No
└─ instructionsstring or arrayA system (or developer) message inserted into the model’s context.

When used along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
No
└─ max_output_tokensintegerAn upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.No
└─ max_tool_callsintegerThe maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.No
└─ metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
└─ modelstringThe model deployment to use for the creation of this response.No
└─ objectenumThe object type of this resource - always set to response.
Possible values: response
No
└─ outputarrayAn array of content items generated by the model.

- The length and order of items in the output array is dependent
on the model’s response.
- Rather than accessing the first item in the output array and
assuming it’s an assistant message with the content generated by
the model, you might consider using the output_text property where
supported in SDKs.
No
└─ output_textstringSDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
No
└─ parallel_tool_callsbooleanWhether to allow the model to run tool calls in parallel.NoTrue
└─ previous_response_idstringThe unique ID of the previous response to the model. Use this to
create multi-turn conversations. Learn more about
managing conversation state.
No
└─ promptOpenAI.PromptReference to a prompt template and its variables.
Learn more.
No
└─ reasoningOpenAI.Reasoningo-series models only

Configuration options for reasoning models.
No
└─ service_tierOpenAI.ServiceTierNote: service_tier is not applicable to Azure OpenAI.No
└─ statusenumThe status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Possible values: completed, failed, in_progress, cancelled, queued, incomplete
No
└─ structured_inputsobjectThe structured inputs to the response that can participate in prompt template substitution or tool argument bindings.No
└─ temperaturenumberWhat sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
No
└─ textobjectConfiguration options for a text response from the model. Can be plain
text or structured JSON data. See Text inputs and outputs
and Structured Outputs
No
└─ formatOpenAI.ResponseTextFormatConfigurationNo
└─ tool_choiceOpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObjectHow the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
No
└─ toolsarrayAn array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

* Built-in tools: Tools that are provided by OpenAI that extend the
model’s capabilities, like web search
or file search. Learn more about
built-in tools.
* Function calls (custom tools): Functions that are defined by you,
enabling the model to call your own code. Learn more about
function calling.
No
└─ top_logprobsintegerAn integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.No
└─ top_pnumberAn alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or temperature but not both.
No
└─ truncationenumThe truncation strategy to use for the model response.
- auto: If the context of this response and previous ones exceeds
the model’s context window size, the model will truncate the
response to fit the context window by dropping input items in the
middle of the conversation.
- disabled (default): If a model response will exceed the context window
size for a model, the request will fail with a 400 error.
Possible values: auto, disabled
No
└─ usageOpenAI.ResponseUsageRepresents token usage details including input tokens, output tokens,
a breakdown of output tokens, and the total tokens used.
No
└─ userstringLearn more about safety best practices.No
typeenumThe type of the event. Always response.completed.
Possible values: response.completed
Yes
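When consuming `response.completed` over raw REST, the SDK-only `output_text` convenience property is not available, so the generated text must be collected from the `output` array. A sketch, assuming the standard Responses API item shapes (`message` items containing `output_text` content parts):

```python
# Sketch: pull generated text out of a response.completed event payload.
# Item/content shapes are assumed from the Responses API; prefer the SDK's
# output_text convenience property where available.
def extract_text(event):
    parts = []
    for item in event["response"].get("output", []):
        if item.get("type") == "message":
            for content in item.get("content", []):
                if content.get("type") == "output_text":
                    parts.append(content["text"])
    return "".join(parts)

# Illustrative event with a placeholder payload.
event = {
    "type": "response.completed",
    "response": {"output": [
        {"type": "message", "role": "assistant",
         "content": [{"type": "output_text", "text": "Hello"}]},
    ]},
}
```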

OpenAI.ResponseContentPartAddedEvent

Emitted when a new content part is added.
NameTypeDescriptionRequiredDefault
content_indexintegerThe index of the content part that was added.Yes
item_idstringThe ID of the output item that the content part was added to.Yes
output_indexintegerThe index of the output item that the content part was added to.Yes
partobjectYes
└─ typeOpenAI.ItemContentTypeMulti-modal input and output contents.No
typeenumThe type of the event. Always response.content_part.added.
Possible values: response.content_part.added
Yes

OpenAI.ResponseContentPartDoneEvent

Emitted when a content part is done.
NameTypeDescriptionRequiredDefault
content_indexintegerThe index of the content part that is done.Yes
item_idstringThe ID of the output item that the content part was added to.Yes
output_indexintegerThe index of the output item that the content part was added to.Yes
partobjectYes
└─ typeOpenAI.ItemContentTypeMulti-modal input and output contents.No
typeenumThe type of the event. Always response.content_part.done.
Possible values: response.content_part.done
Yes

OpenAI.ResponseCreatedEvent

Emitted when a response is created.
NameTypeDescriptionRequiredDefault
responseobjectYes
└─ agentAgentIdThe agent used for this responseNo
└─ backgroundbooleanWhether to run the model response in the background.
Learn more about background responses.
NoFalse
└─ conversationobjectNo
└─ idstringNo
└─ created_atintegerUnix timestamp (in seconds) of when this Response was created.No
└─ errorOpenAI.ResponseErrorAn error object returned when the model fails to generate a Response.No
└─ idstringUnique identifier for this Response.No
└─ incomplete_detailsobjectDetails about why the response is incomplete.No
└─ reasonenumThe reason why the response is incomplete.
Possible values: max_output_tokens, content_filter
No
└─ instructionsstring or arrayA system (or developer) message inserted into the model’s context.

When used along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
No
└─ max_output_tokensintegerAn upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.No
└─ max_tool_callsintegerThe maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.No
└─ metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
└─ modelstringThe model deployment to use for the creation of this response.No
└─ objectenumThe object type of this resource - always set to response.
Possible values: response
No
└─ outputarrayAn array of content items generated by the model.

- The length and order of items in the output array is dependent
on the model’s response.
- Rather than accessing the first item in the output array and
assuming it’s an assistant message with the content generated by
the model, you might consider using the output_text property where
supported in SDKs.
No
└─ output_textstringSDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
No
└─ parallel_tool_callsbooleanWhether to allow the model to run tool calls in parallel.NoTrue
└─ previous_response_idstringThe unique ID of the previous response to the model. Use this to
create multi-turn conversations. Learn more about
managing conversation state.
No
└─ promptOpenAI.PromptReference to a prompt template and its variables.
Learn more.
No
└─ reasoningOpenAI.Reasoningo-series models only

Configuration options for reasoning models.
No
└─ service_tierOpenAI.ServiceTierNote: service_tier is not applicable to Azure OpenAI.No
└─ statusenumThe status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Possible values: completed, failed, in_progress, cancelled, queued, incomplete
No
└─ structured_inputsobjectThe structured inputs to the response that can participate in prompt template substitution or tool argument bindings.No
└─ temperaturenumberWhat sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
No
└─ textobjectConfiguration options for a text response from the model. Can be plain
text or structured JSON data. See Text inputs and outputs
and Structured Outputs
No
└─ formatOpenAI.ResponseTextFormatConfigurationNo
└─ tool_choiceOpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObjectHow the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
No
└─ toolsarrayAn array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

* Built-in tools: Tools that are provided by OpenAI that extend the
model’s capabilities, like web search
or file search. Learn more about
built-in tools.
* Function calls (custom tools): Functions that are defined by you,
enabling the model to call your own code. Learn more about
function calling.
No
└─ top_logprobsintegerAn integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.No
└─ top_pnumberAn alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or temperature but not both.
No
└─ truncationenumThe truncation strategy to use for the model response.
- auto: If the context of this response and previous ones exceeds
the model’s context window size, the model will truncate the
response to fit the context window by dropping input items in the
middle of the conversation.
- disabled (default): If a model response will exceed the context window
size for a model, the request will fail with a 400 error.
Possible values: auto, disabled
No
└─ usageOpenAI.ResponseUsageRepresents token usage details including input tokens, output tokens,
a breakdown of output tokens, and the total tokens used.
No
└─ userstringLearn more about safety best practices.No
typeenumThe type of the event. Always response.created.
Possible values: response.created
Yes

OpenAI.ResponseError

An error object returned when the model fails to generate a Response.
NameTypeDescriptionRequiredDefault
codeOpenAI.ResponseErrorCodeThe error code for the response.Yes
messagestringA human-readable description of the error.Yes

OpenAI.ResponseErrorCode

The error code for the response.
PropertyValue
DescriptionThe error code for the response.
Typestring
Valuesserver_error
rate_limit_exceeded
invalid_prompt
vector_store_timeout
invalid_image
invalid_image_format
invalid_base64_image
invalid_image_url
image_too_large
image_too_small
image_parse_error
image_content_policy_violation
invalid_image_mode
image_file_too_large
unsupported_image_media_type
empty_image_file
failed_to_download_image
image_file_not_found
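Some of the error codes above indicate transient conditions worth retrying, while most (the image validation failures, `invalid_prompt`) are permanent for a given request. Which codes to retry is a judgment call, not part of the specification; one possible classification:

```python
# Sketch: classify OpenAI.ResponseErrorCode values for retry logic.
# The choice of which codes count as transient is an assumption, not spec.
TRANSIENT_CODES = {"server_error", "rate_limit_exceeded", "vector_store_timeout"}

def should_retry(error):
    """error is an OpenAI.ResponseError object: {"code": ..., "message": ...}."""
    return error["code"] in TRANSIENT_CODES
```

Pair this with exponential backoff for `rate_limit_exceeded` rather than immediate retries.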

OpenAI.ResponseErrorEvent

Emitted when an error occurs.
NameTypeDescriptionRequiredDefault
codestringThe error code.Yes
messagestringThe error message.Yes
paramstringThe error parameter.Yes
typeenumThe type of the event. Always error.
Possible values: error
Yes

OpenAI.ResponseFailedEvent

Emitted when a response fails.
NameTypeDescriptionRequiredDefault
responseobjectYes
└─ agentAgentIdThe agent used for this responseNo
└─ backgroundbooleanWhether to run the model response in the background.
Learn more about background responses.
NoFalse
└─ conversationobjectNo
└─ idstringNo
└─ created_atintegerUnix timestamp (in seconds) of when this Response was created.No
└─ errorOpenAI.ResponseErrorAn error object returned when the model fails to generate a Response.No
└─ idstringUnique identifier for this Response.No
└─ incomplete_detailsobjectDetails about why the response is incomplete.No
└─ reasonenumThe reason why the response is incomplete.
Possible values: max_output_tokens, content_filter
No
└─ instructionsstring or arrayA system (or developer) message inserted into the model’s context.

When used along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
No
└─ max_output_tokensintegerAn upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.No
└─ max_tool_callsintegerThe maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.No
└─ metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
└─ modelstringThe model deployment to use for the creation of this response.No
└─ objectenumThe object type of this resource - always set to response.
Possible values: response
No
└─ outputarrayAn array of content items generated by the model.

- The length and order of items in the output array is dependent
on the model’s response.
- Rather than accessing the first item in the output array and
assuming it’s an assistant message with the content generated by
the model, you might consider using the output_text property where
supported in SDKs.
No
└─ output_textstringSDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
No
└─ parallel_tool_callsbooleanWhether to allow the model to run tool calls in parallel.NoTrue
└─ previous_response_idstringThe unique ID of the previous response to the model. Use this to
create multi-turn conversations. Learn more about
managing conversation state.
No
└─ promptOpenAI.PromptReference to a prompt template and its variables.
Learn more.
No
└─ reasoningOpenAI.Reasoningo-series models only

Configuration options for reasoning models.
No
└─ service_tierOpenAI.ServiceTierNote: service_tier is not applicable to Azure OpenAI.No
└─ statusenumThe status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Possible values: completed, failed, in_progress, cancelled, queued, incomplete
No
└─ structured_inputsobjectThe structured inputs to the response that can participate in prompt template substitution or tool argument bindings.No
└─ temperaturenumberWhat sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
No
└─ textobjectConfiguration options for a text response from the model. Can be plain
text or structured JSON data. See Text inputs and outputs
and Structured Outputs
No
└─ formatOpenAI.ResponseTextFormatConfigurationNo
└─ tool_choiceOpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObjectHow the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
No
└─ toolsarrayAn array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

* Built-in tools: Tools that are provided by OpenAI that extend the
model’s capabilities, like web search
or file search. Learn more about
built-in tools.
* Function calls (custom tools): Functions that are defined by you,
enabling the model to call your own code. Learn more about
function calling.
No
└─ top_logprobsintegerAn integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.No
└─ top_pnumberAn alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or temperature but not both.
No
└─ truncationenumThe truncation strategy to use for the model response.
- auto: If the context of this response and previous ones exceeds
the model’s context window size, the model will truncate the
response to fit the context window by dropping input items in the
middle of the conversation.
- disabled (default): If a model response will exceed the context window
size for a model, the request will fail with a 400 error.
Possible values: auto, disabled
No
└─ usageOpenAI.ResponseUsageRepresents token usage details including input tokens, output tokens,
a breakdown of output tokens, and the total tokens used.
No
└─ userstringLearn more about safety best practices.No
typeenumThe type of the event. Always response.failed.
Possible values: response.failed
Yes

OpenAI.ResponseFileSearchCallCompletedEvent

Emitted when a file search call is completed (results found).
NameTypeDescriptionRequiredDefault
item_idstringThe ID of the output item for which the file search call was initiated.Yes
output_indexintegerThe index of the output item for which the file search call was initiated.Yes
typeenumThe type of the event. Always response.file_search_call.completed.
Possible values: response.file_search_call.completed
Yes

OpenAI.ResponseFileSearchCallInProgressEvent

Emitted when a file search call is initiated.
NameTypeDescriptionRequiredDefault
item_idstringThe ID of the output item for which the file search call was initiated.Yes
output_indexintegerThe index of the output item for which the file search call was initiated.Yes
typeenumThe type of the event. Always response.file_search_call.in_progress.
Possible values: response.file_search_call.in_progress
Yes

OpenAI.ResponseFileSearchCallSearchingEvent

Emitted when a file search is currently searching.
NameTypeDescriptionRequiredDefault
item_idstringThe ID of the output item for which the file search call was initiated.Yes
output_indexintegerThe index of the output item for which the file search call is searching.Yes
typeenumThe type of the event. Always response.file_search_call.searching.
Possible values: response.file_search_call.searching
Yes

OpenAI.ResponseFormat

Discriminator for OpenAI.ResponseFormat

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeenum
Possible values: text, json_object, json_schema
Yes

OpenAI.ResponseFormatJsonObject

JSON object response format. An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so.
NameTypeDescriptionRequiredDefault
typeenumThe type of response format being defined. Always json_object.
Possible values: json_object
Yes

OpenAI.ResponseFormatJsonSchema

JSON Schema response format. Used to generate structured JSON responses. The schema is described as a JSON Schema object. Learn more about Structured Outputs.
NameTypeDescriptionRequiredDefault
json_schemaobjectStructured Outputs configuration options, including a JSON Schema.Yes
└─ descriptionstringA description of what the response format is for, used by the model to
determine how to respond in the format.
No
└─ namestringThe name of the response format. Must be a-z, A-Z, 0-9, or contain
underscores and dashes, with a maximum length of 64.
No
└─ schemaobjectNo
└─ strictbooleanWhether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs
guide
.
NoFalse
typeenumThe type of response format being defined. Always json_schema.
Possible values: json_schema
Yes
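Following the `OpenAI.ResponseFormatJsonSchema` fields above, a structured-output format object looks like the sketch below. The schema contents are illustrative; note the `name` constraints and that `strict` limits you to a subset of JSON Schema:

```python
# Sketch: a json_schema response format per the table above.
# The schema itself (ticket_triage) is a made-up example.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "ticket_triage",          # a-z, A-Z, 0-9, _ and -; max length 64
        "description": "Routing decision for a support ticket.",
        "strict": True,                   # enforce exact schema adherence
        "schema": {
            "type": "object",
            "properties": {
                "priority": {"type": "string", "enum": ["low", "medium", "high"]},
                "team": {"type": "string"},
            },
            "required": ["priority", "team"],
            "additionalProperties": False,
        },
    },
}
```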

OpenAI.ResponseFormatText

Default response format. Used to generate text responses.
NameTypeDescriptionRequiredDefault
typeenumThe type of response format being defined. Always text.
Possible values: text
Yes

OpenAI.ResponseFunctionCallArgumentsDeltaEvent

Emitted when there is a partial function-call arguments delta.
NameTypeDescriptionRequiredDefault
deltastringThe function-call arguments delta that is added.Yes
item_idstringThe ID of the output item that the function-call arguments delta is added to.Yes
output_indexintegerThe index of the output item that the function-call arguments delta is added to.Yes
typeenumThe type of the event. Always response.function_call_arguments.delta.
Possible values: response.function_call_arguments.delta
Yes

OpenAI.ResponseFunctionCallArgumentsDoneEvent

Emitted when function-call arguments are finalized.
NameTypeDescriptionRequiredDefault
argumentsstringThe function-call arguments.Yes
item_idstringThe ID of the item.Yes
output_indexintegerThe index of the output item.Yes
typeenum
Possible values: response.function_call_arguments.done
Yes
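Function-call arguments stream the same way as code snippets: buffer the `delta` strings per `item_id`, then parse the complete JSON string carried by the `done` event. A sketch over an already-parsed event stream:

```python
import json

# Sketch: buffer function_call_arguments delta events, parse at done.
def parse_arguments(events):
    buffers = {}   # item_id -> accumulated argument string (for display)
    parsed = {}    # item_id -> parsed arguments from the done event
    for ev in events:
        if ev["type"] == "response.function_call_arguments.delta":
            buffers[ev["item_id"]] = buffers.get(ev["item_id"], "") + ev["delta"]
        elif ev["type"] == "response.function_call_arguments.done":
            parsed[ev["item_id"]] = json.loads(ev["arguments"])
    return parsed

# Illustrative event sequence with placeholder IDs.
events = [
    {"type": "response.function_call_arguments.delta", "item_id": "fc_1",
     "output_index": 0, "delta": '{"city": '},
    {"type": "response.function_call_arguments.delta", "item_id": "fc_1",
     "output_index": 0, "delta": '"Oslo"}'},
    {"type": "response.function_call_arguments.done", "item_id": "fc_1",
     "output_index": 0, "arguments": '{"city": "Oslo"}'},
]
```

Parsing only at `done` avoids repeatedly attempting to decode incomplete JSON fragments.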

OpenAI.ResponseImageGenCallCompletedEvent

Emitted when an image generation tool call has completed and the final image is available.
NameTypeDescriptionRequiredDefault
item_idstringThe unique identifier of the image generation item being processed.Yes
output_indexintegerThe index of the output item in the response’s output array.Yes
typeenumThe type of the event. Always ‘response.image_generation_call.completed’.
Possible values: response.image_generation_call.completed
Yes

OpenAI.ResponseImageGenCallGeneratingEvent

Emitted when an image generation tool call is actively generating an image (intermediate state).
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| item_id | string | The unique identifier of the image generation item being processed. | Yes | |
| output_index | integer | The index of the output item in the response’s output array. | Yes | |
| type | enum | The type of the event. Always response.image_generation_call.generating. Possible values: response.image_generation_call.generating | Yes | |

OpenAI.ResponseImageGenCallInProgressEvent

Emitted when an image generation tool call is in progress.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| item_id | string | The unique identifier of the image generation item being processed. | Yes | |
| output_index | integer | The index of the output item in the response’s output array. | Yes | |
| type | enum | The type of the event. Always response.image_generation_call.in_progress. Possible values: response.image_generation_call.in_progress | Yes | |

OpenAI.ResponseImageGenCallPartialImageEvent

Emitted when a partial image is available during image generation streaming.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| item_id | string | The unique identifier of the image generation item being processed. | Yes | |
| output_index | integer | The index of the output item in the response’s output array. | Yes | |
| partial_image_b64 | string | Base64-encoded partial image data, suitable for rendering as an image. | Yes | |
| partial_image_index | integer | 0-based index for the partial image (backend is 1-based, but this is 0-based for the user). | Yes | |
| type | enum | The type of the event. Always response.image_generation_call.partial_image. Possible values: response.image_generation_call.partial_image | Yes | |
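Each partial_image event carries a complete base64-encoded preview that can be rendered as it arrives. A minimal sketch, assuming already-parsed event dicts; the file-naming scheme is illustrative, not part of the API.

```python
import base64

def save_partial_image(event, out_dir="."):
    """Decode the base64 payload of a partial_image event and write it to
    disk so each streamed preview can be shown to the user as it arrives.
    The .png extension is an assumption about the output format."""
    data = base64.b64decode(event["partial_image_b64"])
    path = (f"{out_dir}/partial_{event['item_id']}"
            f"_{event['partial_image_index']}.png")
    with open(path, "wb") as f:
        f.write(data)
    return path

# Fabricated event; a real payload would contain actual image bytes.
demo_event = {
    "type": "response.image_generation_call.partial_image",
    "item_id": "ig_demo", "output_index": 0, "partial_image_index": 0,
    "partial_image_b64": base64.b64encode(b"fake-image-bytes").decode(),
}
saved = save_partial_image(demo_event)
```

Because partial_image_index is 0-based and increases per preview, later files supersede earlier ones for the same item_id.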

OpenAI.ResponseInProgressEvent

Emitted when the response is in progress.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| response | object | | Yes | |
| └─ agent | AgentId | The agent used for this response | No | |
| └─ background | boolean | Whether to run the model response in the background. Learn more about background responses. | No | False |
| └─ conversation | object | | No | |
| └─ id | string | | No | |
| └─ created_at | integer | Unix timestamp (in seconds) of when this Response was created. | No | |
| └─ error | OpenAI.ResponseError | An error object returned when the model fails to generate a Response. | No | |
| └─ id | string | Unique identifier for this Response. | No | |
| └─ incomplete_details | object | Details about why the response is incomplete. | No | |
| └─ reason | enum | The reason why the response is incomplete. Possible values: max_output_tokens, content_filter | No | |
| └─ instructions | string or array | A system (or developer) message inserted into the model’s context. When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses. | No | |
| └─ max_output_tokens | integer | An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens. | No | |
| └─ max_tool_calls | integer | The maximum number of total calls to built-in tools that can be processed in a response. This maximum applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored. | No | |
| └─ metadata | object | Set of 16 key-value pairs that can be attached to an object. Useful for storing additional information about the object in a structured format, and for querying objects via API or the dashboard. Keys are strings with a maximum length of 64 characters; values are strings with a maximum length of 512 characters. | No | |
| └─ model | string | The model deployment to use for the creation of this response. | No | |
| └─ object | enum | The object type of this resource, always set to response. Possible values: response | No | |
| └─ output | array | An array of content items generated by the model. The length and order of items in the output array depend on the model’s response. Rather than accessing the first item in the output array and assuming it’s an assistant message with the content generated by the model, consider using the output_text property where supported in SDKs. | No | |
| └─ output_text | string | SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs. | No | |
| └─ parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | No | True |
| └─ previous_response_id | string | The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about managing conversation state. | No | |
| └─ prompt | OpenAI.Prompt | Reference to a prompt template and its variables. | No | |
| └─ reasoning | OpenAI.Reasoning | o-series models only. Configuration options for reasoning models. | No | |
| └─ service_tier | OpenAI.ServiceTier | Note: service_tier is not applicable to Azure OpenAI. | No | |
| └─ status | enum | The status of the response generation. Possible values: completed, failed, in_progress, cancelled, queued, incomplete | No | |
| └─ structured_inputs | object | The structured inputs to the response that can participate in prompt template substitution or tool argument bindings. | No | |
| └─ temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. We generally recommend altering this or top_p but not both. | No | |
| └─ text | object | Configuration options for a text response from the model. Can be plain text or structured JSON data. See Text inputs and outputs and Structured Outputs. | No | |
| └─ format | OpenAI.ResponseTextFormatConfiguration | | No | |
| └─ tool_choice | OpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObject | How the model should select which tool (or tools) to use when generating a response. See the tools parameter for how to specify which tools the model can call. | No | |
| └─ tools | array | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. The two categories of tools you can provide are built-in tools, provided by OpenAI to extend the model’s capabilities (like web search or file search), and function calls (custom tools), which are functions defined by you that enable the model to call your own code. | No | |
| └─ top_logprobs | integer | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. | No | |
| └─ top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | No | |
| └─ truncation | enum | The truncation strategy to use for the model response. auto: if the context of this response and previous ones exceeds the model’s context window size, the model truncates the response to fit by dropping input items in the middle of the conversation. disabled (default): if a model response would exceed the context window size, the request fails with a 400 error. Possible values: auto, disabled | No | |
| └─ usage | OpenAI.ResponseUsage | Token usage details, including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. | No | |
| └─ user | string | Learn more about safety best practices. | No | |
| type | enum | The type of the event. Always response.in_progress. Possible values: response.in_progress | Yes | |

OpenAI.ResponseIncompleteEvent

An event that is emitted when a response finishes as incomplete.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| response | object | | Yes | |
| └─ agent | AgentId | The agent used for this response | No | |
| └─ background | boolean | Whether to run the model response in the background. Learn more about background responses. | No | False |
| └─ conversation | object | | No | |
| └─ id | string | | No | |
| └─ created_at | integer | Unix timestamp (in seconds) of when this Response was created. | No | |
| └─ error | OpenAI.ResponseError | An error object returned when the model fails to generate a Response. | No | |
| └─ id | string | Unique identifier for this Response. | No | |
| └─ incomplete_details | object | Details about why the response is incomplete. | No | |
| └─ reason | enum | The reason why the response is incomplete. Possible values: max_output_tokens, content_filter | No | |
| └─ instructions | string or array | A system (or developer) message inserted into the model’s context. When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses. | No | |
| └─ max_output_tokens | integer | An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens. | No | |
| └─ max_tool_calls | integer | The maximum number of total calls to built-in tools that can be processed in a response. This maximum applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored. | No | |
| └─ metadata | object | Set of 16 key-value pairs that can be attached to an object. Useful for storing additional information about the object in a structured format, and for querying objects via API or the dashboard. Keys are strings with a maximum length of 64 characters; values are strings with a maximum length of 512 characters. | No | |
| └─ model | string | The model deployment to use for the creation of this response. | No | |
| └─ object | enum | The object type of this resource, always set to response. Possible values: response | No | |
| └─ output | array | An array of content items generated by the model. The length and order of items in the output array depend on the model’s response. Rather than accessing the first item in the output array and assuming it’s an assistant message with the content generated by the model, consider using the output_text property where supported in SDKs. | No | |
| └─ output_text | string | SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs. | No | |
| └─ parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | No | True |
| └─ previous_response_id | string | The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about managing conversation state. | No | |
| └─ prompt | OpenAI.Prompt | Reference to a prompt template and its variables. | No | |
| └─ reasoning | OpenAI.Reasoning | o-series models only. Configuration options for reasoning models. | No | |
| └─ service_tier | OpenAI.ServiceTier | Note: service_tier is not applicable to Azure OpenAI. | No | |
| └─ status | enum | The status of the response generation. Possible values: completed, failed, in_progress, cancelled, queued, incomplete | No | |
| └─ structured_inputs | object | The structured inputs to the response that can participate in prompt template substitution or tool argument bindings. | No | |
| └─ temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. We generally recommend altering this or top_p but not both. | No | |
| └─ text | object | Configuration options for a text response from the model. Can be plain text or structured JSON data. See Text inputs and outputs and Structured Outputs. | No | |
| └─ format | OpenAI.ResponseTextFormatConfiguration | | No | |
| └─ tool_choice | OpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObject | How the model should select which tool (or tools) to use when generating a response. See the tools parameter for how to specify which tools the model can call. | No | |
| └─ tools | array | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. The two categories of tools you can provide are built-in tools, provided by OpenAI to extend the model’s capabilities (like web search or file search), and function calls (custom tools), which are functions defined by you that enable the model to call your own code. | No | |
| └─ top_logprobs | integer | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. | No | |
| └─ top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | No | |
| └─ truncation | enum | The truncation strategy to use for the model response. auto: if the context of this response and previous ones exceeds the model’s context window size, the model truncates the response to fit by dropping input items in the middle of the conversation. disabled (default): if a model response would exceed the context window size, the request fails with a 400 error. Possible values: auto, disabled | No | |
| └─ usage | OpenAI.ResponseUsage | Token usage details, including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. | No | |
| └─ user | string | Learn more about safety best practices. | No | |
| type | enum | The type of the event. Always response.incomplete. Possible values: response.incomplete | Yes | |

OpenAI.ResponseMCPCallArgumentsDeltaEvent

Emitted when there is a delta (partial update) to the arguments of an MCP tool call.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| delta | | The partial update to the arguments for the MCP tool call. | Yes | |
| item_id | string | The unique identifier of the MCP tool call item being processed. | Yes | |
| output_index | integer | The index of the output item in the response’s output array. | Yes | |
| type | enum | The type of the event. Always response.mcp_call.arguments_delta. Possible values: response.mcp_call.arguments_delta | Yes | |

OpenAI.ResponseMCPCallArgumentsDoneEvent

Emitted when the arguments for an MCP tool call are finalized.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| arguments | | The finalized arguments for the MCP tool call. | Yes | |
| item_id | string | The unique identifier of the MCP tool call item being processed. | Yes | |
| output_index | integer | The index of the output item in the response’s output array. | Yes | |
| type | enum | The type of the event. Always response.mcp_call.arguments_done. Possible values: response.mcp_call.arguments_done | Yes | |

OpenAI.ResponseMCPCallCompletedEvent

Emitted when an MCP tool call has completed successfully.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| type | enum | The type of the event. Always response.mcp_call.completed. Possible values: response.mcp_call.completed | Yes | |

OpenAI.ResponseMCPCallFailedEvent

Emitted when an MCP tool call has failed.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| type | enum | The type of the event. Always response.mcp_call.failed. Possible values: response.mcp_call.failed | Yes | |

OpenAI.ResponseMCPCallInProgressEvent

Emitted when an MCP tool call is in progress.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| item_id | string | The unique identifier of the MCP tool call item being processed. | Yes | |
| output_index | integer | The index of the output item in the response’s output array. | Yes | |
| type | enum | The type of the event. Always response.mcp_call.in_progress. Possible values: response.mcp_call.in_progress | Yes | |

OpenAI.ResponseMCPListToolsCompletedEvent

Emitted when the list of available MCP tools has been successfully retrieved.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| type | enum | The type of the event. Always response.mcp_list_tools.completed. Possible values: response.mcp_list_tools.completed | Yes | |

OpenAI.ResponseMCPListToolsFailedEvent

Emitted when the attempt to list available MCP tools has failed.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| type | enum | The type of the event. Always response.mcp_list_tools.failed. Possible values: response.mcp_list_tools.failed | Yes | |

OpenAI.ResponseMCPListToolsInProgressEvent

Emitted when the system is in the process of retrieving the list of available MCP tools.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| type | enum | The type of the event. Always response.mcp_list_tools.in_progress. Possible values: response.mcp_list_tools.in_progress | Yes | |

OpenAI.ResponseOutputItemAddedEvent

Emitted when a new output item is added.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| item | object | Content item used to generate a response. | Yes | |
| └─ created_by | CreatedBy | The information about the creator of the item | No | |
| └─ id | string | | No | |
| └─ type | OpenAI.ItemType | | No | |
| output_index | integer | The index of the output item that was added. | Yes | |
| type | enum | The type of the event. Always response.output_item.added. Possible values: response.output_item.added | Yes | |

OpenAI.ResponseOutputItemDoneEvent

Emitted when an output item is marked done.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| item | object | Content item used to generate a response. | Yes | |
| └─ created_by | CreatedBy | The information about the creator of the item | No | |
| └─ id | string | | No | |
| └─ type | OpenAI.ItemType | | No | |
| output_index | integer | The index of the output item that was marked done. | Yes | |
| type | enum | The type of the event. Always response.output_item.done. Possible values: response.output_item.done | Yes | |
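Clients typically mirror the response’s output array locally from these two events: added inserts a placeholder at the given output_index, and done replaces it with the finalized item. A minimal sketch over already-parsed event dicts; the item payloads are fabricated.

```python
def apply_output_item_events(events):
    """Rebuild the response's output array from output_item events.

    `added` places the item at its output_index (growing the list as
    needed, since indices refer to positions in the final array);
    `done` overwrites that slot with the finalized item."""
    output = []
    for event in events:
        if event["type"] in ("response.output_item.added",
                             "response.output_item.done"):
            idx = event["output_index"]
            while len(output) <= idx:
                output.append(None)  # placeholder until the item arrives
            output[idx] = event["item"]
    return output

# Fabricated events matching the schemas above:
items = apply_output_item_events([
    {"type": "response.output_item.added", "output_index": 0,
     "item": {"id": "msg_1", "type": "message"}},
    {"type": "response.output_item.done", "output_index": 0,
     "item": {"id": "msg_1", "type": "message", "status": "completed"}},
])
print(items[0]["status"])  # completed
```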

OpenAI.ResponsePromptVariables

Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files. Type: object

OpenAI.ResponseQueuedEvent

Emitted when a response is queued and waiting to be processed.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| response | object | | Yes | |
| └─ agent | AgentId | The agent used for this response | No | |
| └─ background | boolean | Whether to run the model response in the background. Learn more about background responses. | No | False |
| └─ conversation | object | | No | |
| └─ id | string | | No | |
| └─ created_at | integer | Unix timestamp (in seconds) of when this Response was created. | No | |
| └─ error | OpenAI.ResponseError | An error object returned when the model fails to generate a Response. | No | |
| └─ id | string | Unique identifier for this Response. | No | |
| └─ incomplete_details | object | Details about why the response is incomplete. | No | |
| └─ reason | enum | The reason why the response is incomplete. Possible values: max_output_tokens, content_filter | No | |
| └─ instructions | string or array | A system (or developer) message inserted into the model’s context. When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses. | No | |
| └─ max_output_tokens | integer | An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens. | No | |
| └─ max_tool_calls | integer | The maximum number of total calls to built-in tools that can be processed in a response. This maximum applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored. | No | |
| └─ metadata | object | Set of 16 key-value pairs that can be attached to an object. Useful for storing additional information about the object in a structured format, and for querying objects via API or the dashboard. Keys are strings with a maximum length of 64 characters; values are strings with a maximum length of 512 characters. | No | |
| └─ model | string | The model deployment to use for the creation of this response. | No | |
| └─ object | enum | The object type of this resource, always set to response. Possible values: response | No | |
| └─ output | array | An array of content items generated by the model. The length and order of items in the output array depend on the model’s response. Rather than accessing the first item in the output array and assuming it’s an assistant message with the content generated by the model, consider using the output_text property where supported in SDKs. | No | |
| └─ output_text | string | SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs. | No | |
| └─ parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. | No | True |
| └─ previous_response_id | string | The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about managing conversation state. | No | |
| └─ prompt | OpenAI.Prompt | Reference to a prompt template and its variables. | No | |
| └─ reasoning | OpenAI.Reasoning | o-series models only. Configuration options for reasoning models. | No | |
| └─ service_tier | OpenAI.ServiceTier | Note: service_tier is not applicable to Azure OpenAI. | No | |
| └─ status | enum | The status of the response generation. Possible values: completed, failed, in_progress, cancelled, queued, incomplete | No | |
| └─ structured_inputs | object | The structured inputs to the response that can participate in prompt template substitution or tool argument bindings. | No | |
| └─ temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. We generally recommend altering this or top_p but not both. | No | |
| └─ text | object | Configuration options for a text response from the model. Can be plain text or structured JSON data. See Text inputs and outputs and Structured Outputs. | No | |
| └─ format | OpenAI.ResponseTextFormatConfiguration | | No | |
| └─ tool_choice | OpenAI.ToolChoiceOptions or OpenAI.ToolChoiceObject | How the model should select which tool (or tools) to use when generating a response. See the tools parameter for how to specify which tools the model can call. | No | |
| └─ tools | array | An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter. The two categories of tools you can provide are built-in tools, provided by OpenAI to extend the model’s capabilities (like web search or file search), and function calls (custom tools), which are functions defined by you that enable the model to call your own code. | No | |
| └─ top_logprobs | integer | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. | No | |
| └─ top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | No | |
| └─ truncation | enum | The truncation strategy to use for the model response. auto: if the context of this response and previous ones exceeds the model’s context window size, the model truncates the response to fit by dropping input items in the middle of the conversation. disabled (default): if a model response would exceed the context window size, the request fails with a 400 error. Possible values: auto, disabled | No | |
| └─ usage | OpenAI.ResponseUsage | Token usage details, including input tokens, output tokens, a breakdown of output tokens, and the total tokens used. | No | |
| └─ user | string | Learn more about safety best practices. | No | |
| type | enum | The type of the event. Always response.queued. Possible values: response.queued | Yes | |

OpenAI.ResponseReasoningDeltaEvent

Emitted when there is a delta (partial update) to the reasoning content.
NameTypeDescriptionRequiredDefault
content_indexintegerThe index of the reasoning content part within the output item.Yes
deltaThe partial update to the reasoning content.Yes
item_idstringThe unique identifier of the item for which reasoning is being updated.Yes
output_indexintegerThe index of the output item in the response’s output array.Yes
typeenumThe type of the event. Always ‘response.reasoning.delta’.
Possible values: response.reasoning.delta
Yes

OpenAI.ResponseReasoningDoneEvent

Emitted when the reasoning content is finalized for an item.
NameTypeDescriptionRequiredDefault
content_indexintegerThe index of the reasoning content part within the output item.Yes
item_idstringThe unique identifier of the item for which reasoning is finalized.Yes
output_indexintegerThe index of the output item in the response’s output array.Yes
textstringThe finalized reasoning text.Yes
typeenumThe type of the event. Always ‘response.reasoning.done’.
Possible values: response.reasoning.done
Yes

OpenAI.ResponseReasoningSummaryDeltaEvent

Emitted when there is a delta (partial update) to the reasoning summary content.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| delta | | The partial update to the reasoning summary content. | Yes | |
| item_id | string | The unique identifier of the item for which the reasoning summary is being updated. | Yes | |
| output_index | integer | The index of the output item in the response’s output array. | Yes | |
| summary_index | integer | The index of the summary part within the output item. | Yes | |
| type | enum | The type of the event. Always response.reasoning_summary.delta. Possible values: response.reasoning_summary.delta | Yes | |

OpenAI.ResponseReasoningSummaryDoneEvent

Emitted when the reasoning summary content is finalized for an item.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| item_id | string | The unique identifier of the item for which the reasoning summary is finalized. | Yes | |
| output_index | integer | The index of the output item in the response’s output array. | Yes | |
| summary_index | integer | The index of the summary part within the output item. | Yes | |
| text | string | The finalized reasoning summary text. | Yes | |
| type | enum | The type of the event. Always response.reasoning_summary.done. Possible values: response.reasoning_summary.done | Yes | |

OpenAI.ResponseReasoningSummaryPartAddedEvent

Emitted when a new reasoning summary part is added.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| item_id | string | The ID of the item this summary part is associated with. | Yes | |
| output_index | integer | The index of the output item this summary part is associated with. | Yes | |
| part | object | | Yes | |
| └─ type | OpenAI.ReasoningItemSummaryPartType | | No | |
| summary_index | integer | The index of the summary part within the reasoning summary. | Yes | |
| type | enum | The type of the event. Always response.reasoning_summary_part.added. Possible values: response.reasoning_summary_part.added | Yes | |

OpenAI.ResponseReasoningSummaryPartDoneEvent

Emitted when a reasoning summary part is completed.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| item_id | string | The ID of the item this summary part is associated with. | Yes | |
| output_index | integer | The index of the output item this summary part is associated with. | Yes | |
| part | object | | Yes | |
| └─ type | OpenAI.ReasoningItemSummaryPartType | | No | |
| summary_index | integer | The index of the summary part within the reasoning summary. | Yes | |
| type | enum | The type of the event. Always response.reasoning_summary_part.done. Possible values: response.reasoning_summary_part.done | Yes | |

OpenAI.ResponseReasoningSummaryTextDeltaEvent

Emitted when a delta is added to a reasoning summary text.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| delta | string | The text delta that was added to the summary. | Yes | |
| item_id | string | The ID of the item this summary text delta is associated with. | Yes | |
| output_index | integer | The index of the output item this summary text delta is associated with. | Yes | |
| summary_index | integer | The index of the summary part within the reasoning summary. | Yes | |
| type | enum | The type of the event. Always response.reasoning_summary_text.delta. Possible values: response.reasoning_summary_text.delta | Yes | |

OpenAI.ResponseReasoningSummaryTextDoneEvent

Emitted when a reasoning summary text is completed.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| item_id | string | The ID of the item this summary text is associated with. | Yes | |
| output_index | integer | The index of the output item this summary text is associated with. | Yes | |
| summary_index | integer | The index of the summary part within the reasoning summary. | Yes | |
| text | string | The full text of the completed reasoning summary. | Yes | |
| type | enum | The type of the event. Always response.reasoning_summary_text.done. Possible values: response.reasoning_summary_text.done | Yes | |

OpenAI.ResponseRefusalDeltaEvent

Emitted when there is a partial refusal text.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| content_index | integer | The index of the content part that the refusal text is added to. | Yes | |
| delta | string | The refusal text that is added. | Yes | |
| item_id | string | The ID of the output item that the refusal text is added to. | Yes | |
| output_index | integer | The index of the output item that the refusal text is added to. | Yes | |
| type | enum | The type of the event. Always response.refusal.delta. Possible values: response.refusal.delta | Yes | |

OpenAI.ResponseRefusalDoneEvent

Emitted when refusal text is finalized.
| Name | Type | Description | Required | Default |
|------|------|-------------|----------|---------|
| content_index | integer | The index of the content part in which the refusal text is finalized. | Yes | |
| item_id | string | The ID of the output item in which the refusal text is finalized. | Yes | |
| output_index | integer | The index of the output item in which the refusal text is finalized. | Yes | |
| refusal | string | The refusal text that is finalized. | Yes | |
| type | enum | The type of the event. Always response.refusal.done. Possible values: response.refusal.done | Yes | |

OpenAI.ResponseStreamEvent

Discriminator for OpenAI.ResponseStreamEvent

This component uses the property type to discriminate between different types:
| Type Value | Schema |
|------------|--------|
| response.completed | OpenAI.ResponseCompletedEvent |
| response.content_part.added | OpenAI.ResponseContentPartAddedEvent |
| response.content_part.done | OpenAI.ResponseContentPartDoneEvent |
| response.created | OpenAI.ResponseCreatedEvent |
| error | OpenAI.ResponseErrorEvent |
| response.file_search_call.completed | OpenAI.ResponseFileSearchCallCompletedEvent |
| response.file_search_call.in_progress | OpenAI.ResponseFileSearchCallInProgressEvent |
| response.file_search_call.searching | OpenAI.ResponseFileSearchCallSearchingEvent |
| response.function_call_arguments.delta | OpenAI.ResponseFunctionCallArgumentsDeltaEvent |
| response.function_call_arguments.done | OpenAI.ResponseFunctionCallArgumentsDoneEvent |
| response.in_progress | OpenAI.ResponseInProgressEvent |
| response.failed | OpenAI.ResponseFailedEvent |
| response.incomplete | OpenAI.ResponseIncompleteEvent |
| response.output_item.added | OpenAI.ResponseOutputItemAddedEvent |
| response.output_item.done | OpenAI.ResponseOutputItemDoneEvent |
| response.refusal.delta | OpenAI.ResponseRefusalDeltaEvent |
| response.refusal.done | OpenAI.ResponseRefusalDoneEvent |
| response.output_text.delta | OpenAI.ResponseTextDeltaEvent |
| response.output_text.done | OpenAI.ResponseTextDoneEvent |
| response.reasoning_summary_part.added | OpenAI.ResponseReasoningSummaryPartAddedEvent |
| response.reasoning_summary_part.done | OpenAI.ResponseReasoningSummaryPartDoneEvent |
| response.reasoning_summary_text.delta | OpenAI.ResponseReasoningSummaryTextDeltaEvent |
| response.reasoning_summary_text.done | OpenAI.ResponseReasoningSummaryTextDoneEvent |
| response.web_search_call.completed | OpenAI.ResponseWebSearchCallCompletedEvent |
| response.web_search_call.in_progress | OpenAI.ResponseWebSearchCallInProgressEvent |
| response.web_search_call.searching | OpenAI.ResponseWebSearchCallSearchingEvent |
| response.image_generation_call.completed | OpenAI.ResponseImageGenCallCompletedEvent |
| response.image_generation_call.generating | OpenAI.ResponseImageGenCallGeneratingEvent |
| response.image_generation_call.in_progress | OpenAI.ResponseImageGenCallInProgressEvent |
| response.image_generation_call.partial_image | OpenAI.ResponseImageGenCallPartialImageEvent |
| response.mcp_call.arguments_delta | OpenAI.ResponseMCPCallArgumentsDeltaEvent |
| response.mcp_call.arguments_done | OpenAI.ResponseMCPCallArgumentsDoneEvent |
| response.mcp_call.completed | OpenAI.ResponseMCPCallCompletedEvent |
| response.mcp_call.failed | OpenAI.ResponseMCPCallFailedEvent |
| response.mcp_call.in_progress | OpenAI.ResponseMCPCallInProgressEvent |
| response.mcp_list_tools.completed | OpenAI.ResponseMCPListToolsCompletedEvent |
| response.mcp_list_tools.failed | OpenAI.ResponseMCPListToolsFailedEvent |
| response.mcp_list_tools.in_progress | OpenAI.ResponseMCPListToolsInProgressEvent |
| response.queued | OpenAI.ResponseQueuedEvent |
| response.reasoning.delta | OpenAI.ResponseReasoningDeltaEvent |
| response.reasoning.done | OpenAI.ResponseReasoningDoneEvent |
| response.reasoning_summary.delta | OpenAI.ResponseReasoningSummaryDeltaEvent |
| response.reasoning_summary.done | OpenAI.ResponseReasoningSummaryDoneEvent |
| response.code_interpreter_call_code.delta | OpenAI.ResponseCodeInterpreterCallCodeDeltaEvent |
| response.code_interpreter_call_code.done | OpenAI.ResponseCodeInterpreterCallCodeDoneEvent |
| response.code_interpreter_call.completed | OpenAI.ResponseCodeInterpreterCallCompletedEvent |
| response.code_interpreter_call.in_progress | OpenAI.ResponseCodeInterpreterCallInProgressEvent |
| response.code_interpreter_call.interpreting | OpenAI.ResponseCodeInterpreterCallInterpretingEvent |
NameTypeDescriptionRequiredDefault
sequence_numberintegerThe sequence number for this event.Yes
typeOpenAI.ResponseStreamEventTypeYes

OpenAI.ResponseStreamEventType

PropertyValue
Typestring
Valuesresponse.audio.delta
response.audio.done
response.audio_transcript.delta
response.audio_transcript.done
response.code_interpreter_call_code.delta
response.code_interpreter_call_code.done
response.code_interpreter_call.completed
response.code_interpreter_call.in_progress
response.code_interpreter_call.interpreting
response.completed
response.content_part.added
response.content_part.done
response.created
error
response.file_search_call.completed
response.file_search_call.in_progress
response.file_search_call.searching
response.function_call_arguments.delta
response.function_call_arguments.done
response.in_progress
response.failed
response.incomplete
response.output_item.added
response.output_item.done
response.refusal.delta
response.refusal.done
response.output_text.annotation.added
response.output_text.delta
response.output_text.done
response.reasoning_summary_part.added
response.reasoning_summary_part.done
response.reasoning_summary_text.delta
response.reasoning_summary_text.done
response.web_search_call.completed
response.web_search_call.in_progress
response.web_search_call.searching
response.image_generation_call.completed
response.image_generation_call.generating
response.image_generation_call.in_progress
response.image_generation_call.partial_image
response.mcp_call.arguments_delta
response.mcp_call.arguments_done
response.mcp_call.completed
response.mcp_call.failed
response.mcp_call.in_progress
response.mcp_list_tools.completed
response.mcp_list_tools.failed
response.mcp_list_tools.in_progress
response.queued
response.reasoning.delta
response.reasoning.done
response.reasoning_summary.delta
response.reasoning_summary.done

OpenAI.ResponseTextDeltaEvent

Emitted when there is an additional text delta.
NameTypeDescriptionRequiredDefault
content_indexintegerThe index of the content part that the text delta was added to.Yes
deltastringThe text delta that was added.Yes
item_idstringThe ID of the output item that the text delta was added to.Yes
output_indexintegerThe index of the output item that the text delta was added to.Yes
typeenumThe type of the event. Always response.output_text.delta.
Possible values: response.output_text.delta
Yes
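For illustration, a response.output_text.delta event carrying the fields above might be consumed as in this minimal sketch (the item ID and delta text are hypothetical):

```python
# Hypothetical response.output_text.delta event, using the fields
# documented above (item_id and delta values are made up).
event = {
    "type": "response.output_text.delta",
    "sequence_number": 7,
    "item_id": "msg_abc123",
    "output_index": 0,
    "content_index": 0,
    "delta": "Hello",
}

# A stream consumer typically accumulates deltas per (item_id, content_index)
# until the matching response.output_text.done event arrives.
buffers = {}
key = (event["item_id"], event["content_index"])
buffers[key] = buffers.get(key, "") + event["delta"]
```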

OpenAI.ResponseTextDoneEvent

Emitted when text content is finalized.
NameTypeDescriptionRequiredDefault
content_indexintegerThe index of the content part that the text content is finalized.Yes
item_idstringThe ID of the output item that the text content is finalized.Yes
output_indexintegerThe index of the output item that the text content is finalized.Yes
textstringThe text content that is finalized.Yes
typeenumThe type of the event. Always response.output_text.done.
Possible values: response.output_text.done
Yes

OpenAI.ResponseTextFormatConfiguration

Discriminator for OpenAI.ResponseTextFormatConfiguration

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.ResponseTextFormatConfigurationTypeAn object specifying the format that the model must output.

Configuring { "type": "json_schema" } enables Structured Outputs,
which ensures the model will match your supplied JSON schema. Learn more in the
Structured Outputs guide.

The default format is { "type": "text" } with no additional options.

Not recommended for gpt-4o and newer models:

Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
Yes

OpenAI.ResponseTextFormatConfigurationJsonObject

NameTypeDescriptionRequiredDefault
typeenum
Possible values: json_object
Yes

OpenAI.ResponseTextFormatConfigurationJsonSchema

JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
NameTypeDescriptionRequiredDefault
descriptionstringA description of what the response format is for, used by the model to
determine how to respond in the format.
No
namestringThe name of the response format. Must be a-z, A-Z, 0-9, or contain
underscores and dashes, with a maximum length of 64.
Yes
schemaobjectYes
strictbooleanWhether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs
guide
.
NoFalse
typeenumThe type of response format being defined. Always json_schema.
Possible values: json_schema
Yes
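As a sketch, a json_schema response format built from the fields above might look like the following (the name and schema are illustrative, not part of the API surface):

```python
# Illustrative Structured Outputs configuration (json_schema).
# The "weather_report" name and schema are hypothetical examples.
text_format = {
    "type": "json_schema",
    "name": "weather_report",  # a-z, A-Z, 0-9, underscores and dashes; max length 64
    "description": "A structured summary of current weather conditions.",
    "strict": True,  # model output must adhere exactly to the schema
    "schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "temperature_c": {"type": "number"},
        },
        "required": ["city", "temperature_c"],
        "additionalProperties": False,
    },
}
```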

OpenAI.ResponseTextFormatConfigurationText

NameTypeDescriptionRequiredDefault
typeenum
Possible values: text
Yes

OpenAI.ResponseTextFormatConfigurationType

An object specifying the format that the model must output. Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide. The default format is { "type": "text" } with no additional options. Not recommended for gpt-4o and newer models: setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.
PropertyValue
DescriptionAn object specifying the format that the model must output. Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. The default format is { "type": "text" } with no additional options. Not recommended for gpt-4o and newer models: setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.
Typestring
Valuestext
json_schema
json_object

OpenAI.ResponseUsage

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.
NameTypeDescriptionRequiredDefault
input_tokensintegerThe number of input tokens.Yes
input_tokens_detailsobjectA detailed breakdown of the input tokens.Yes
└─ cached_tokensintegerThe number of tokens that were retrieved from the cache.
More on prompt caching.
No
output_tokensintegerThe number of output tokens.Yes
output_tokens_detailsobjectA detailed breakdown of the output tokens.Yes
└─ reasoning_tokensintegerThe number of reasoning tokens.No
total_tokensintegerThe total number of tokens used.Yes
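A usage object with these fields might look like the following sketch (the token counts are invented; in this sketch the total is consistent with its parts):

```python
# Hypothetical usage payload; the counts are invented for illustration.
usage = {
    "input_tokens": 120,
    "input_tokens_details": {"cached_tokens": 40},
    "output_tokens": 80,
    "output_tokens_details": {"reasoning_tokens": 25},
    "total_tokens": 200,
}

# total_tokens is the sum of input and output tokens.
assert usage["total_tokens"] == usage["input_tokens"] + usage["output_tokens"]
```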

OpenAI.ResponseWebSearchCallCompletedEvent

Note: web_search is not yet available via Azure OpenAI.
NameTypeDescriptionRequiredDefault
item_idstringUnique ID for the output item associated with the web search call.Yes
output_indexintegerThe index of the output item that the web search call is associated with.Yes
typeenumThe type of the event. Always response.web_search_call.completed.
Possible values: response.web_search_call.completed
Yes

OpenAI.ResponseWebSearchCallInProgressEvent

Note: web_search is not yet available via Azure OpenAI.
NameTypeDescriptionRequiredDefault
item_idstringUnique ID for the output item associated with the web search call.Yes
output_indexintegerThe index of the output item that the web search call is associated with.Yes
typeenumThe type of the event. Always response.web_search_call.in_progress.
Possible values: response.web_search_call.in_progress
Yes

OpenAI.ResponseWebSearchCallSearchingEvent

Note: web_search is not yet available via Azure OpenAI.
NameTypeDescriptionRequiredDefault
item_idstringUnique ID for the output item associated with the web search call.Yes
output_indexintegerThe index of the output item that the web search call is associated with.Yes
typeenumThe type of the event. Always response.web_search_call.searching.
Possible values: response.web_search_call.searching
Yes

OpenAI.ResponsesAssistantMessageItemParam

A message parameter item with the assistant role.
NameTypeDescriptionRequiredDefault
contentstring or arrayYes
roleenumThe role of the message, which is always assistant.
Possible values: assistant
Yes

OpenAI.ResponsesAssistantMessageItemResource

A message resource item with the assistant role.
NameTypeDescriptionRequiredDefault
contentarrayThe content associated with the message.Yes
roleenumThe role of the message, which is always assistant.
Possible values: assistant
Yes

OpenAI.ResponsesDeveloperMessageItemParam

A message parameter item with the developer role.
NameTypeDescriptionRequiredDefault
contentstring or arrayYes
roleenumThe role of the message, which is always developer.
Possible values: developer
Yes

OpenAI.ResponsesDeveloperMessageItemResource

A message resource item with the developer role.
NameTypeDescriptionRequiredDefault
contentarrayThe content associated with the message.Yes
roleenumThe role of the message, which is always developer.
Possible values: developer
Yes

OpenAI.ResponsesMessageItemParam

A response message item, representing a role and content, as provided as client request parameters.

Discriminator for OpenAI.ResponsesMessageItemParam

This component uses the property role to discriminate between different types:
NameTypeDescriptionRequiredDefault
roleobjectThe collection of valid roles for responses message items.Yes
typeenumThe type of the responses item, which is always ‘message’.
Possible values: message
Yes

OpenAI.ResponsesMessageItemResource

A response message resource item, representing a role and content, as provided on service responses.

Discriminator for OpenAI.ResponsesMessageItemResource

This component uses the property role to discriminate between different types:
NameTypeDescriptionRequiredDefault
roleobjectThe collection of valid roles for responses message items.Yes
statusenumThe status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
Possible values: in_progress, completed, incomplete
Yes
typeenumThe type of the responses item, which is always ‘message’.
Possible values: message
Yes

OpenAI.ResponsesMessageRole

The collection of valid roles for responses message items.
PropertyValue
DescriptionThe collection of valid roles for responses message items.
Typestring
Valuessystem
developer
user
assistant

OpenAI.ResponsesSystemMessageItemParam

A message parameter item with the system role.
NameTypeDescriptionRequiredDefault
contentstring or arrayYes
roleenumThe role of the message, which is always system.
Possible values: system
Yes

OpenAI.ResponsesSystemMessageItemResource

A message resource item with the system role.
NameTypeDescriptionRequiredDefault
contentarrayThe content associated with the message.Yes
roleenumThe role of the message, which is always system.
Possible values: system
Yes

OpenAI.ResponsesUserMessageItemParam

A message parameter item with the user role.
NameTypeDescriptionRequiredDefault
contentstring or arrayYes
roleenumThe role of the message, which is always user.
Possible values: user
Yes

OpenAI.ResponsesUserMessageItemResource

A message resource item with the user role.
NameTypeDescriptionRequiredDefault
contentarrayThe content associated with the message.Yes
roleenumThe role of the message, which is always user.
Possible values: user
Yes

OpenAI.ServiceTier

Specifies the processing type used for serving the request.
  • If set to ‘auto’, then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use ‘default’.
  • If set to ‘default’, then the request will be processed with the standard pricing and performance for the selected model.
  • If set to ‘flex’ or ‘priority’, then the request will be processed with the corresponding service tier. Contact sales to learn more about Priority processing.
  • When not set, the default behavior is ‘auto’.
When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
PropertyValue
DescriptionSpecifies the processing type used for serving the request.
* If set to ‘auto’, then the request will be processed with the service tier
configured in the Project settings. Unless otherwise configured, the Project will use ‘default’.
* If set to ‘default’, then the request will be processed with the standard
pricing and performance for the selected model.
* If set to ‘flex’ or ‘priority’, then the request will be processed with the corresponding service
tier. Contact sales to learn more about Priority processing.
* When not set, the default behavior is ‘auto’.

When the service_tier parameter is set, the response body will include the service_tier
value based on the processing mode actually used to serve the request. This response value
may be different from the value set in the parameter.
Typestring
Valuesauto
default
flex
scale
priority

OpenAI.TextResponseFormatConfiguration

An object specifying the format that the model must output. Configuring { "type": "json_schema" } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide. The default format is { "type": "text" } with no additional options. Not recommended for gpt-4o and newer models: setting to { "type": "json_object" } enables the older JSON mode, which ensures the message the model generates is valid JSON. Using json_schema is preferred for models that support it.

Discriminator for OpenAI.TextResponseFormatConfiguration

This component uses the property type to discriminate between different types:
Type ValueSchema
NameTypeDescriptionRequiredDefault
typestringYes

OpenAI.Tool

Discriminator for OpenAI.Tool

This component uses the property type to discriminate between different types:
Type ValueSchema
functionOpenAI.FunctionTool
file_searchOpenAI.FileSearchTool
computer_use_previewOpenAI.ComputerUsePreviewTool
web_search_previewOpenAI.WebSearchPreviewTool
code_interpreterOpenAI.CodeInterpreterTool
image_generationOpenAI.ImageGenTool
local_shellOpenAI.LocalShellTool
mcpOpenAI.MCPTool
bing_groundingBingGroundingAgentTool
fabric_dataagent_previewMicrosoftFabricAgentTool
sharepoint_grounding_previewSharepointAgentTool
azure_ai_searchAzureAISearchAgentTool
openapiOpenApiAgentTool
bing_custom_search_previewBingCustomSearchAgentTool
browser_automation_previewBrowserAutomationAgentTool
azure_functionAzureFunctionAgentTool
capture_structured_outputsCaptureStructuredOutputsTool
a2a_previewA2ATool
memory_searchMemorySearchTool
NameTypeDescriptionRequiredDefault
typeOpenAI.ToolTypeA tool that can be used to generate a response.Yes

OpenAI.ToolChoiceObject

Discriminator for OpenAI.ToolChoiceObject

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.ToolChoiceObjectTypeIndicates that the model should use a built-in tool to generate a response.
Learn more about built-in tools.
Yes

OpenAI.ToolChoiceObjectCodeInterpreter

NameTypeDescriptionRequiredDefault
typeenum
Possible values: code_interpreter
Yes

OpenAI.ToolChoiceObjectComputer

NameTypeDescriptionRequiredDefault
typeenum
Possible values: computer_use_preview
Yes

OpenAI.ToolChoiceObjectFileSearch

NameTypeDescriptionRequiredDefault
typeenum
Possible values: file_search
Yes

OpenAI.ToolChoiceObjectFunction

Use this option to force the model to call a specific function.
NameTypeDescriptionRequiredDefault
namestringThe name of the function to call.Yes
typeenumFor function calling, the type is always function.
Possible values: function
Yes
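To force a specific function call, the tool_choice object takes the shape above. For example (the function name is hypothetical and would have to appear in the request's tools array):

```python
# Force the model to call a specific function.
# "get_weather" is a hypothetical function name defined in the tools array.
tool_choice = {
    "type": "function",
    "name": "get_weather",
}
```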

OpenAI.ToolChoiceObjectImageGen

NameTypeDescriptionRequiredDefault
typeenum
Possible values: image_generation
Yes

OpenAI.ToolChoiceObjectMCP

Use this option to force the model to call a specific tool on a remote MCP server.
NameTypeDescriptionRequiredDefault
namestringThe name of the tool to call on the server.No
server_labelstringThe label of the MCP server to use.Yes
typeenumFor MCP tools, the type is always mcp.
Possible values: mcp
Yes
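Likewise, forcing a tool on a remote MCP server uses the fields above; this sketch uses a hypothetical server label and tool name:

```python
# Force a call to a specific tool on a remote MCP server.
# The server label and tool name here are hypothetical.
tool_choice = {
    "type": "mcp",
    "server_label": "docs_server",
    "name": "search_docs",  # optional; omit to let the model pick any tool on the server
}
```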

OpenAI.ToolChoiceObjectType

Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.
PropertyValue
DescriptionIndicates that the model should use a built-in tool to generate a response.
Learn more about built-in tools.
Typestring
Valuesfile_search
function
computer_use_preview
web_search_preview
image_generation
code_interpreter
mcp

OpenAI.ToolChoiceObjectWebSearch

Note: web_search is not yet available via Azure OpenAI.
NameTypeDescriptionRequiredDefault
typeenum
Possible values: web_search_preview
Yes

OpenAI.ToolChoiceOptions

Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools.
PropertyValue
DescriptionControls which (if any) tool is called by the model.

none means the model will not call any tool and instead generates a message.

auto means the model can pick between generating a message or calling one or
more tools.

required means the model must call one or more tools.
Typestring
Valuesnone
auto
required
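Because tool_choice accepts either one of these plain strings or one of the object forms documented above, a client-side check might distinguish the two shapes like this (a sketch, not part of the API):

```python
# The plain-string values of OpenAI.ToolChoiceOptions.
VALID_TOOL_CHOICE_OPTIONS = {"none", "auto", "required"}

def is_tool_choice_option(value):
    """True when value is the plain-string form of tool_choice,
    as opposed to an object form such as {"type": "function", ...}."""
    return isinstance(value, str) and value in VALID_TOOL_CHOICE_OPTIONS
```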

OpenAI.ToolType

A tool that can be used to generate a response.
PropertyValue
DescriptionA tool that can be used to generate a response.
Typestring
Valuesfile_search
function
computer_use_preview
web_search_preview
mcp
code_interpreter
image_generation
local_shell
bing_grounding
browser_automation_preview
fabric_dataagent_preview
sharepoint_grounding_preview
azure_ai_search
openapi
bing_custom_search_preview
capture_structured_outputs
a2a_preview
azure_function
memory_search

OpenAI.TopLogProb

The top log probability of a token.
NameTypeDescriptionRequiredDefault
bytesarrayYes
logprobnumberYes
tokenstringYes

OpenAI.UpdateConversationRequest

Update a conversation
NameTypeDescriptionRequiredDefault
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No

OpenAI.VectorStoreFileAttributes

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers. Type: object

OpenAI.WebSearchAction

Discriminator for OpenAI.WebSearchAction

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeOpenAI.WebSearchActionTypeYes

OpenAI.WebSearchActionFind

Action type “find”: Searches for a pattern within a loaded page.
NameTypeDescriptionRequiredDefault
patternstringThe pattern or text to search for within the page.Yes
typeenumThe action type.
Possible values: find
Yes
urlstringThe URL of the page searched for the pattern.Yes

OpenAI.WebSearchActionOpenPage

Action type “open_page” - Opens a specific URL from search results.
NameTypeDescriptionRequiredDefault
typeenumThe action type.
Possible values: open_page
Yes
urlstringThe URL opened by the model.Yes

OpenAI.WebSearchActionSearch

Action type “search” - Performs a web search query.
NameTypeDescriptionRequiredDefault
querystringThe search query.Yes
sourcesarrayThe sources used in the search.No
typeenumThe action type.
Possible values: search
Yes

OpenAI.WebSearchActionSearchSources

NameTypeDescriptionRequiredDefault
typeenum
Possible values: url
Yes
urlstringYes

OpenAI.WebSearchActionType

PropertyValue
Typestring
Valuessearch
open_page
find

OpenAI.WebSearchPreviewTool

Note: web_search is not yet available via Azure OpenAI.
NameTypeDescriptionRequiredDefault
search_context_sizeenumHigh level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
Possible values: low, medium, high
No
typeenumThe type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
Possible values: web_search_preview
Yes
user_locationobjectNo
└─ typeOpenAI.LocationTypeNo
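An entry for this tool in a request's tools array might look like the following sketch (the user_location fields beyond type are assumptions, not confirmed by the table above):

```python
# Illustrative web_search_preview tool entry for a request's tools array.
# The user_location fields beyond "type" are assumptions.
web_search_tool = {
    "type": "web_search_preview",
    "search_context_size": "medium",  # one of low, medium, high; medium is the default
    "user_location": {
        "type": "approximate",  # OpenAI.LocationType
        "country": "US",        # hypothetical extra field
    },
}
```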

OpenAI.WebSearchToolCallItemParam

The results of a web search tool call. See the web search guide for more information.
NameTypeDescriptionRequiredDefault
actionobjectYes
└─ typeOpenAI.WebSearchActionTypeNo
typeenum
Possible values: web_search_call
Yes

OpenAI.WebSearchToolCallItemResource

The results of a web search tool call. See the web search guide for more information.
NameTypeDescriptionRequiredDefault
actionobjectYes
└─ typeOpenAI.WebSearchActionTypeNo
statusenumThe status of the web search tool call.
Possible values: in_progress, searching, completed, failed
Yes
typeenum
Possible values: web_search_call
Yes

OpenAI.integer

Type: integer Format: int64

OpenAI.numeric

Type: number Format: double

OpenApiAgentTool

The input definition information for an OpenAPI tool as used to configure an agent.
NameTypeDescriptionRequiredDefault
openapiobjectThe input definition information for an openapi function.Yes
└─ authOpenApiAuthDetailsOpen API authentication detailsNo
└─ default_paramsarrayList of OpenAPI spec parameters that will use user-provided defaultsNo
└─ descriptionstringA description of what the function does, used by the model to choose when and how to call the function.No
└─ functionsarrayList of function definitions used by OpenApi toolNo
└─ namestringThe name of the function to be called.No
└─ specThe openapi function shape, described as a JSON Schema object.No
typeenumThe object type, which is always ‘openapi’.
Possible values: openapi
Yes
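Combining the fields above, an OpenAPI agent tool with anonymous authentication might be configured as in this sketch (the function name, description, and minimal spec are hypothetical):

```python
# Sketch of an OpenAPI agent tool definition with anonymous auth.
# The function name, description, and minimal spec are hypothetical.
openapi_tool = {
    "type": "openapi",
    "openapi": {
        "name": "get_weather",
        "description": "Look up current weather via a public REST API.",
        "auth": {"type": "anonymous"},  # OpenApiAnonymousAuthDetails
        "spec": {
            "openapi": "3.0.0",
            "info": {"title": "Weather API", "version": "1.0.0"},
            "paths": {},
        },
    },
}
```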

OpenApiAnonymousAuthDetails

Security details for OpenApi anonymous authentication
NameTypeDescriptionRequiredDefault
typeenumThe object type, which is always ‘anonymous’.
Possible values: anonymous
Yes

OpenApiAuthDetails

Authentication details for OpenApiFunctionDefinition

Discriminator for OpenApiAuthDetails

This component uses the property type to discriminate between different types:
Type ValueSchema
anonymousOpenApiAnonymousAuthDetails
project_connectionOpenApiProjectConnectionAuthDetails
managed_identityOpenApiManagedAuthDetails
NameTypeDescriptionRequiredDefault
typeobjectAuthentication type for OpenApi endpoint. Allowed types are:
- Anonymous (no authentication required)
- Project Connection (requires project_connection_id to the endpoint, as set up in Foundry)
- Managed_Identity (requires audience for identity-based auth)
Yes

OpenApiAuthType

Authentication type for OpenApi endpoint. Allowed types are:
  • Anonymous (no authentication required)
  • Project Connection (requires project_connection_id to the endpoint, as set up in Foundry)
  • Managed_Identity (requires audience for identity-based auth)
PropertyValue
Typestring
Valuesanonymous
project_connection
managed_identity

OpenApiFunctionDefinition

The input definition information for an openapi function.
NameTypeDescriptionRequiredDefault
authobjectAuthentication details for OpenApiFunctionDefinitionYes
└─ typeOpenApiAuthTypeThe type of authentication, must be anonymous/project_connection/managed_identityNo
default_paramsarrayList of OpenAPI spec parameters that will use user-provided defaultsNo
descriptionstringA description of what the function does, used by the model to choose when and how to call the function.No
functionsarrayList of function definitions used by OpenApi toolNo
namestringThe name of the function to be called.Yes
specThe openapi function shape, described as a JSON Schema object.Yes

OpenApiManagedAuthDetails

Security details for OpenApi managed_identity authentication
NameTypeDescriptionRequiredDefault
security_schemeobjectSecurity scheme for OpenApi managed_identity authenticationYes
└─ audiencestringAuthentication scope for managed_identity auth typeNo
typeenumThe object type, which is always ‘managed_identity’.
Possible values: managed_identity
Yes

OpenApiManagedSecurityScheme

Security scheme for OpenApi managed_identity authentication
NameTypeDescriptionRequiredDefault
audiencestringAuthentication scope for managed_identity auth typeYes

OpenApiProjectConnectionAuthDetails

Security details for OpenApi project connection authentication
NameTypeDescriptionRequiredDefault
security_schemeobjectSecurity scheme for OpenApi project connection authenticationYes
└─ project_connection_idstringProject connection id for Project Connection auth typeNo
typeenumThe object type, which is always ‘project_connection’.
Possible values: project_connection
Yes

OpenApiProjectConnectionSecurityScheme

Security scheme for OpenApi project connection authentication
NameTypeDescriptionRequiredDefault
project_connection_idstringProject connection id for Project Connection auth typeYes

PagedConnection

Paged collection of Connection items
NameTypeDescriptionRequiredDefault
nextLinkstringThe link to the next page of itemsNo
valuearrayThe Connection items on this pageYes
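All of the Paged* collections that follow share this nextLink/value shape, so a client can walk any of them by following nextLink until it is absent. A minimal sketch, assuming a hypothetical fetch callable that GETs a URL and returns the parsed JSON page:

```python
# Generic pager for Paged* collections: yield every item, following
# nextLink until the service stops returning one. `fetch` is a
# hypothetical callable that GETs a URL and returns the parsed JSON body.
def iter_pages(fetch, first_url):
    url = first_url
    while url:
        page = fetch(url)
        for item in page.get("value", []):
            yield item
        url = page.get("nextLink")  # absent on the last page
```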

PagedDatasetVersion

Paged collection of DatasetVersion items
NameTypeDescriptionRequiredDefault
nextLinkstringThe link to the next page of itemsNo
valuearrayThe DatasetVersion items on this pageYes

PagedDeployment

Paged collection of Deployment items
NameTypeDescriptionRequiredDefault
nextLinkstringThe link to the next page of itemsNo
valuearrayThe Deployment items on this pageYes

PagedEvaluationRule

Paged collection of EvaluationRule items
NameTypeDescriptionRequiredDefault
nextLinkstringThe link to the next page of itemsNo
valuearrayThe EvaluationRule items on this pageYes

PagedEvaluationTaxonomy

Paged collection of EvaluationTaxonomy items
NameTypeDescriptionRequiredDefault
nextLinkstringThe link to the next page of itemsNo
valuearrayThe EvaluationTaxonomy items on this pageYes

PagedEvaluatorVersion

Paged collection of EvaluatorVersion items
NameTypeDescriptionRequiredDefault
nextLinkstringThe link to the next page of itemsNo
valuearrayThe EvaluatorVersion items on this pageYes

PagedIndex

Paged collection of Index items
NameTypeDescriptionRequiredDefault
nextLinkstringThe link to the next page of itemsNo
valuearrayThe Index items on this pageYes

PagedInsight

Paged collection of Insight items
NameTypeDescriptionRequiredDefault
nextLinkstringThe link to the next page of itemsNo
valuearrayThe Insight items on this pageYes

PagedRedTeam

Paged collection of RedTeam items
NameTypeDescriptionRequiredDefault
nextLinkstringThe link to the next page of itemsNo
valuearrayThe RedTeam items on this pageYes

PagedSchedule

Paged collection of Schedule items
NameTypeDescriptionRequiredDefault
nextLinkstringThe link to the next page of itemsNo
valuearrayThe Schedule items on this pageYes

PagedScheduleRun

Paged collection of ScheduleRun items
NameTypeDescriptionRequiredDefault
nextLinkstringThe link to the next page of itemsNo
valuearrayThe ScheduleRun items on this pageYes

PendingUploadRequest

Represents a request for a pending upload.
NameTypeDescriptionRequiredDefault
connectionNamestringAzure Storage Account connection name to use for generating temporary SAS tokenNo
pendingUploadIdstringIf PendingUploadId is not provided, a random GUID will be used.No
pendingUploadTypeenumBlobReference is the only supported type.
Possible values: BlobReference
Yes

PendingUploadResponse

Represents the response for a pending upload request
NameTypeDescriptionRequiredDefault
blobReferenceobjectBlob reference details.Yes
└─ blobUristringBlob URI path for the client to upload data. Example: https://blob.core.windows.net/Container/PathNo
└─ credentialSasCredentialCredential info to access the storage account.No
└─ storageAccountArmIdstringARM ID of the storage account to use.No
pendingUploadIdstringID for this upload request.Yes
pendingUploadTypeenumBlobReference is the only supported type
Possible values: BlobReference
Yes
versionstringVersion of the asset to be created if the user did not specify a version when initially creating the uploadNo

PromptAgentDefinition

The prompt agent definition
NameTypeDescriptionRequiredDefault
instructionsstringA system (or developer) message inserted into the model’s context.No
kindenum
Possible values: prompt
Yes
modelstringThe model deployment to use for this agent.Yes
reasoningobjecto-series models only

Configuration options for reasoning models.
No
└─ effortOpenAI.ReasoningEffortConstrains effort on reasoning for reasoning models.

Currently supported values are none, minimal, low, medium, and high.

Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.

All models before gpt-5.1 default to medium reasoning effort, and do not support none.

The gpt-5-pro model defaults to (and only supports) high reasoning effort.
No
└─ generate_summaryenumDeprecated: use summary instead. A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model’s reasoning process. One of auto, concise, or detailed.
Possible values: auto, concise, detailed
No
└─ summaryenumA summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model’s reasoning process.
One of auto, concise, or detailed.
Possible values: auto, concise, detailed
No
structured_inputsobjectSet of structured inputs that can participate in prompt template substitution or tool argument bindings.No
temperaturenumberWhat sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
No1
textobjectConfiguration options for a text response from the model. Can be plain text or structured JSON data.No
└─ formatOpenAI.ResponseTextFormatConfigurationNo
toolsarrayAn array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.
No
top_pnumberAn alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or temperature but not both.
No1
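Putting the fields above together with the create-agent request body, a prompt agent definition might look like this sketch (the name, model deployment, and instructions are hypothetical):

```python
# Hypothetical create-agent request body with a prompt agent definition.
create_agent_body = {
    "name": "weather-helper",  # alphanumeric start/end, hyphens allowed, max 63 chars
    "description": "Answers questions about the weather.",
    "definition": {
        "kind": "prompt",
        "model": "gpt-4o",  # hypothetical model deployment name
        "instructions": "You are a concise weather assistant.",
        "temperature": 0.2,
        "tools": [],
    },
}
```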

PromptBasedEvaluatorDefinition

Prompt-based evaluator
NameTypeDescriptionRequiredDefault
prompt_textstringThe prompt text used for evaluationYes
typeenum
Possible values: prompt
Yes

ProtocolVersionRecord

A record mapping for a single protocol and its version.
NameTypeDescriptionRequiredDefault
protocolobjectYes
versionstringThe version string for the protocol, e.g. ‘v0.1.1’.Yes

RaiConfig

Configuration for Responsible AI (RAI) content filtering and safety features.
NameTypeDescriptionRequiredDefault
rai_policy_namestringThe name of the RAI policy to apply.Yes

RecurrenceSchedule

Recurrence schedule model.

Discriminator for RecurrenceSchedule

This component uses the property type to discriminate between different types:
NameTypeDescriptionRequiredDefault
typeobjectRecurrence type.Yes

RecurrenceTrigger

Recurrence based trigger.
NameTypeDescriptionRequiredDefault
endTimestringEnd time for the recurrence schedule in ISO 8601 format.No
intervalintegerInterval for the recurrence schedule.Yes
scheduleobjectRecurrence schedule model.Yes
└─ typeRecurrenceTypeRecurrence type for the recurrence schedule.No
startTimestringStart time for the recurrence schedule in ISO 8601 format.No
timeZonestringTime zone for the recurrence schedule.NoUTC
typeenumType of the trigger.
Possible values: Recurrence
Yes
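As a sketch, a RecurrenceTrigger that fires weekly might be serialized as follows. The field names come from the tables above (RecurrenceTrigger, WeeklyRecurrenceSchedule, RecurrenceType); the concrete interval, days, and timestamps are illustrative placeholders, not values from the reference.

```python
import json

# Hypothetical RecurrenceTrigger payload built from the fields above.
# Concrete values (interval, days, times) are illustrative only.
recurrence_trigger = {
    "type": "Recurrence",            # discriminator; the only allowed value
    "interval": 1,                   # required: repeat every 1 unit of the schedule
    "schedule": {
        "type": "Weekly",            # a RecurrenceType value (Hourly/Daily/Weekly/Monthly)
        "daysOfWeek": ["Monday", "Thursday"],  # WeeklyRecurrenceSchedule field
    },
    "startTime": "2025-01-01T00:00:00Z",  # optional, ISO 8601
    "endTime": "2025-12-31T00:00:00Z",    # optional, ISO 8601
    "timeZone": "UTC",                    # optional; the documented default
}

print(json.dumps(recurrence_trigger))
```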

RecurrenceType

Recurrence type.
PropertyValue
DescriptionRecurrence type.
Typestring
ValuesHourly
Daily
Weekly
Monthly

RedTeam

Red team details.
NameTypeDescriptionRequiredDefault
applicationScenariostringApplication scenario for the red team operation, to generate scenario specific attacks.No
attackStrategiesarrayList of attack strategies or nested lists of attack strategies.No
displayNamestringName of the red-team run.No
idstringIdentifier of the red team run.Yes
numTurnsintegerNumber of simulation rounds.No
propertiesobjectRed team’s properties. Unlike tags, properties are add-only. Once added, a property cannot be removed.No
riskCategoriesarrayList of risk categories to generate attack objectives for.No
simulationOnlybooleanSimulation-only or simulation plus evaluation. Defaults to false; if true, the scan outputs conversations rather than evaluation results.NoFalse
statusstringStatus of the red-team. It is set by service and is read-only.No
tagsobjectRed team’s tags. Unlike properties, tags are fully mutable.No
targetobjectAbstract class for target configuration.Yes
└─ typestringType of the model configuration.No
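A minimal RedTeam payload, per the table above, only needs `id` and `target`; everything else is optional. The sketch below is a hypothetical example (the identifier and target type are placeholders; `AzureOpenAIModel` is the TargetConfig discriminator value listed later in this reference).

```python
# Minimal RedTeam payload sketch: only `id` and `target` are required.
# The id value and the target's concrete fields are hypothetical.
red_team = {
    "id": "rt-example-001",                  # placeholder identifier
    "target": {"type": "AzureOpenAIModel"},  # a TargetConfig discriminator value
    "numTurns": 3,                           # optional number of simulation rounds
    "simulationOnly": False,                 # default: run evaluation as well
    "riskCategories": ["Violence", "SelfHarm"],  # RiskCategory enum values
}

required = {"id", "target"}
assert required <= red_team.keys()
```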

RedTeamItemGenerationParams

Represents the parameters for red team item generation.
NameTypeDescriptionRequiredDefault
attack_strategiesarrayThe collection of attack strategies to be used.Yes
num_turnsintegerThe number of turns allowed in the game.Yes
typeenumThe type of item generation parameters, always red_team.
Possible values: red_team
Yes

RiskCategory

Risk category for the attack objective.
PropertyValue
DescriptionRisk category for the attack objective.
Typestring
ValuesHateUnfairness
Violence
Sexual
SelfHarm
ProtectedMaterial
CodeVulnerability
UngroundedAttributes
ProhibitedActions
SensitiveDataLeakage
TaskAdherence

SASCredentials

Shared Access Signature (SAS) credential definition
NameTypeDescriptionRequiredDefault
SASstringSAS tokenNo
typeenumThe credential type
Possible values: SAS
Yes

SampleType

The type of sample used in the analysis.
PropertyValue
Typestring
ValuesEvaluationResultSample

SasCredential

SAS Credential definition
NameTypeDescriptionRequiredDefault
sasUristringSAS uriYes
typeenumType of credential
Possible values: SAS
Yes
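Note that the reference defines two distinct SAS shapes that share the same `type` discriminator value: SASCredentials carries a raw token in `SAS`, while SasCredential carries a full URI in `sasUri`. The sketch below contrasts the two; both token and URI values are placeholders.

```python
# Two SAS shapes from the tables above; the secrets here are placeholders.
sas_credentials = {"type": "SAS", "SAS": "<sas-token>"}   # SASCredentials: raw token
sas_credential = {"type": "SAS", "sasUri": "<sas-uri>"}   # SasCredential: full URI

# Both variants use the same discriminator value.
assert sas_credentials["type"] == sas_credential["type"] == "SAS"
```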

Schedule

Schedule model.
NameTypeDescriptionRequiredDefault
descriptionstringDescription of the schedule.No
displayNamestringName of the schedule.No
enabledbooleanEnabled status of the schedule.Yes
idstringIdentifier of the schedule.Yes
propertiesobjectSchedule’s properties. Unlike tags, properties are add-only. Once added, a property cannot be removed.No
provisioningStatusobjectSchedule provisioning status.No
systemDataobjectSystem metadata for the resource.Yes
tagsobjectSchedule’s tags. Unlike properties, tags are fully mutable.No
taskobjectSchedule task model.Yes
└─ configurationobjectConfiguration for the task.No
└─ typeScheduleTaskTypeType of the task.No
triggerobjectBase model for Trigger of the schedule.Yes
└─ typeTriggerTypeType of the trigger.No
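A Schedule composes a ScheduleTask and a Trigger, discriminated by their `type` fields. The sketch below assembles the required pieces from the table above; all concrete values (the id, display name, and empty configuration) are hypothetical, and `systemData` is shown empty because the service populates it.

```python
# Illustrative Schedule payload; required fields per the table are
# enabled, id, systemData, task, and trigger. Values are placeholders.
schedule = {
    "id": "sched-nightly-eval",        # placeholder identifier
    "displayName": "Nightly evaluation",
    "enabled": True,                   # required
    "systemData": {},                  # required; set by the service
    "task": {
        "type": "Evaluation",          # a ScheduleTaskType value (Evaluation/Insight)
        "configuration": {},           # task-specific configuration
    },
    "trigger": {
        "type": "Recurrence",          # a TriggerType value (Cron/Recurrence/OneTime)
    },
}

assert {"id", "enabled", "systemData", "task", "trigger"} <= schedule.keys()
```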

ScheduleProvisioningStatus

Schedule provisioning status.
PropertyValue
DescriptionSchedule provisioning status.
Typestring
ValuesCreating
Updating
Deleting
Succeeded
Failed

ScheduleRun

Schedule run model.
NameTypeDescriptionRequiredDefault
errorstringError information for the schedule run.No
idstringIdentifier of the schedule run.Yes
propertiesobjectProperties of the schedule run.Yes
scheduleIdstringIdentifier of the schedule.Yes
successbooleanTrigger success status of the schedule run.Yes
triggerTimestringTrigger time of the schedule run.No

ScheduleTask

Schedule task model.

Discriminator for ScheduleTask

This component uses the property type to discriminate between different types:
Type ValueSchema
EvaluationEvaluationScheduleTask
InsightInsightScheduleTask
NameTypeDescriptionRequiredDefault
configurationobjectConfiguration for the task.No
typeobjectType of the task.Yes

ScheduleTaskType

Type of the task.
PropertyValue
DescriptionType of the task.
Typestring
ValuesEvaluation
Insight

SeedPromptsRedTeamItemGenerationParams

Represents the parameters for red team item generation with seed prompts.
NameTypeDescriptionRequiredDefault
attack_strategiesarrayThe collection of attack strategies to be used.Yes
num_turnsintegerThe number of turns allowed in the game.Yes
sourceobjectYes
└─ contentarrayThe content of the jsonl file.No
└─ idstringThe identifier of the file.No
└─ typeenumThe type of jsonl source. Always file_id.
Possible values: file_id
No
typeenumThe type of item generation parameters, always red_team_seed_prompts.
Possible values: red_team_seed_prompts
Yes
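Putting the table above together, a SeedPromptsRedTeamItemGenerationParams body might look like the following sketch. Per the table, the nested source's `type` is always `file_id`; the strategy name and file identifier are placeholders.

```python
# Sketch of SeedPromptsRedTeamItemGenerationParams; the attack strategy
# and file id are hypothetical placeholders.
params = {
    "type": "red_team_seed_prompts",   # discriminator for this variant
    "attack_strategies": ["example_strategy"],  # placeholder strategy name
    "num_turns": 2,                    # required turn count
    "source": {
        "type": "file_id",             # the only allowed source type
        "id": "file-abc123",           # placeholder file identifier
    },
}

assert params["source"]["type"] == "file_id"
```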

SharepointAgentTool

The input definition information for a sharepoint tool as used to configure an agent.
NameTypeDescriptionRequiredDefault
sharepoint_grounding_previewobjectThe sharepoint grounding tool parameters.Yes
└─ project_connectionsarrayThe project connections attached to this tool. There can be a maximum of 1 connection
resource attached to the tool.
No
typeenumThe object type, which is always ‘sharepoint_grounding_preview’.
Possible values: sharepoint_grounding_preview
Yes

SharepointGroundingToolParameters

The sharepoint grounding tool parameters.
NameTypeDescriptionRequiredDefault
project_connectionsarrayThe project connections attached to this tool. There can be a maximum of 1 connection
resource attached to the tool.
No

Sku

Sku information
NameTypeDescriptionRequiredDefault
capacityintegerSku capacityYes
familystringSku familyYes
namestringSku nameYes
sizestringSku sizeYes
tierstringSku tierYes

StructuredInputDefinition

A structured input that can participate in prompt template substitutions and tool argument binding.
NameTypeDescriptionRequiredDefault
default_valueThe default value for the input if no run-time value is provided.No
descriptionstringA human-readable description of the input.No
requiredbooleanWhether the input property is required when the agent is invoked.NoFalse
schemaThe JSON schema for the structured input (optional).No

StructuredOutputDefinition

A structured output that can be produced by the agent.
NameTypeDescriptionRequiredDefault
descriptionstringA description of the output to emit. Used by the model to determine when to emit the output.Yes
namestringThe name of the structured output.Yes
schemaThe JSON schema for the structured output.Yes
strictbooleanWhether to enforce strict validation. Default true.Yes
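All four StructuredOutputDefinition fields in the table above are required. The sketch below shows one plausible definition; the output name, description, and JSON schema are hypothetical examples.

```python
# Illustrative StructuredOutputDefinition; name, description, and schema
# contents are hypothetical placeholders.
structured_output = {
    "name": "ticket_summary",          # placeholder output name
    "description": "Emit when the agent has summarized a support ticket.",
    "schema": {                        # JSON schema for the output
        "type": "object",
        "properties": {"summary": {"type": "string"}},
        "required": ["summary"],
    },
    "strict": True,                    # enforce strict validation (default true)
}

assert set(structured_output) == {"name", "description", "schema", "strict"}
```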

StructuredOutputsItemResource

NameTypeDescriptionRequiredDefault
outputThe structured output captured during the response.Yes
typeenum
Possible values: structured_outputs
Yes

Target

Base class for targets with discriminator support.

Discriminator for Target

This component uses the property type to discriminate between different types:
Type ValueSchema
azure_ai_modelAzureAIModelTarget
azure_ai_agentAzureAIAgentTarget
azure_ai_assistantAzureAIAssistantTarget
NameTypeDescriptionRequiredDefault
typestringThe type of target.Yes
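Since the Target union is discriminated on `type`, a consumer can dispatch on that field to pick the concrete schema. The helper below is a hypothetical sketch (its name is not part of the API); the mapping itself mirrors the discriminator table above.

```python
# Hypothetical dispatch helper over the Target discriminator table above.
def target_schema_name(target: dict) -> str:
    kinds = {
        "azure_ai_model": "AzureAIModelTarget",
        "azure_ai_agent": "AzureAIAgentTarget",
        "azure_ai_assistant": "AzureAIAssistantTarget",
    }
    try:
        return kinds[target["type"]]
    except KeyError:
        raise ValueError(f"unknown target type: {target.get('type')!r}")

print(target_schema_name({"type": "azure_ai_agent"}))
```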

TargetCompletions

Represents a data source for target-based completion evaluation configuration.
NameTypeDescriptionRequiredDefault
input_messagesobjectNo
└─ item_referencestringNo
└─ typeenum
Possible values: item_reference
No
sourceobjectYes
└─ contentarrayThe content of the jsonl file.No
└─ idstringThe identifier of the file.No
└─ typeenumThe type of jsonl source. Always file_id.
Possible values: file_id
No
targetobjectBase class for targets with discriminator support.Yes
└─ typestringThe type of target.No
typeenumThe type of data source, always azure_ai_target_completions.
Possible values: azure_ai_target_completions
Yes

TargetConfig

Abstract class for target configuration.

Discriminator for TargetConfig

This component uses the property type to discriminate between different types:
Type ValueSchema
AzureOpenAIModelAzureOpenAIModelConfiguration
NameTypeDescriptionRequiredDefault
typestringType of the model configuration.Yes

TargetUpdate

Base class for targets with discriminator support.

Discriminator for TargetUpdate

This component uses the property type to discriminate between different types:
Type ValueSchema
azure_ai_modelAzureAIModelTargetUpdate
azure_ai_assistantAzureAIAssistantTargetUpdate
NameTypeDescriptionRequiredDefault
typestringThe type of target.Yes

TaxonomyCategory

Taxonomy category definition.
NameTypeDescriptionRequiredDefault
descriptionstringDescription of the taxonomy category.No
idstringUnique identifier of the taxonomy category.Yes
namestringName of the taxonomy category.Yes
propertiesobjectAdditional properties for the taxonomy category.No
riskCategoryobjectRisk category for the attack objective.Yes
subCategoriesarrayList of taxonomy sub categories.Yes

TaxonomyRedTeamItemGenerationParams

Represents the parameters for red team item generation driven by a taxonomy.
NameTypeDescriptionRequiredDefault
attack_strategiesarrayThe collection of attack strategies to be used.Yes
num_turnsintegerThe number of turns allowed in the game.Yes
sourceobjectYes
└─ contentarrayThe content of the jsonl file.No
└─ idstringThe identifier of the file.No
└─ typeenumThe type of jsonl source. Always file_id.
Possible values: file_id
No
typeenumThe type of item generation parameters, always red_team_taxonomy.
Possible values: red_team_taxonomy
Yes

TaxonomySubCategory

Taxonomy sub-category definition.
NameTypeDescriptionRequiredDefault
descriptionstringDescription of the taxonomy sub-category.No
enabledbooleanWhether this taxonomy sub-category is enabled.Yes
idstringUnique identifier of the taxonomy sub-category.Yes
namestringName of the taxonomy sub-category.Yes
propertiesobjectAdditional properties for the taxonomy sub-category.No

ToolDescription

Description of a tool that can be used by an agent.
NameTypeDescriptionRequiredDefault
descriptionstringA brief description of the tool’s purpose.No
namestringThe name of the tool.No

ToolProjectConnection

A project connection resource.
NameTypeDescriptionRequiredDefault
project_connection_idstringA project connection in a ToolProjectConnectionList attached to this tool.Yes

TracesEvalRunDataSource

Represents a data source for evaluation runs that operate over Agent traces stored in Application Insights.
NameTypeDescriptionRequiredDefault
lookback_hoursintegerLookback window (in hours) applied when retrieving traces from Application Insights.No168
trace_idsarrayCollection of Agent trace identifiers that should be evaluated.Yes
typeenumThe type of data source, always azure_ai_traces.
Possible values: azure_ai_traces
Yes
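A TracesEvalRunDataSource only requires `trace_ids` and the fixed `type` value; when `lookback_hours` is omitted, the documented default of 168 hours (one week) applies. The trace identifiers below are placeholders.

```python
# Sketch of a TracesEvalRunDataSource; trace ids are placeholders.
traces_source = {
    "type": "azure_ai_traces",             # the only allowed value
    "trace_ids": ["trace-1", "trace-2"],   # placeholder trace identifiers
}

# lookback_hours falls back to the documented default of 168 when omitted.
lookback = traces_source.get("lookback_hours", 168)
assert lookback == 168
```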

TreatmentEffectType

Treatment Effect Type.
PropertyValue
Typestring
ValuesTooFewSamples
Inconclusive
Changed
Improved
Degraded

Trigger

Base model for Trigger of the schedule.

Discriminator for Trigger

This component uses the property type to discriminate between different types:
Type ValueSchema
CronCronTrigger
RecurrenceRecurrenceTrigger
OneTimeOneTimeTrigger
NameTypeDescriptionRequiredDefault
typeobjectType of the trigger.Yes

TriggerType

Type of the trigger.
PropertyValue
DescriptionType of the trigger.
Typestring
ValuesCron
Recurrence
OneTime

UpdateAgentFromManifestRequest

NameTypeDescriptionRequiredDefault
descriptionstringA human-readable description of the agent.No
manifest_idstringThe manifest ID to import the agent version from.Yes
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
parameter_valuesobjectThe inputs to the manifest that will result in a fully materialized Agent.Yes
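An UpdateAgentFromManifestRequest body requires `manifest_id` and `parameter_values`; the sketch below fills in the optional fields as well. The manifest ID, parameter names, and metadata keys are hypothetical placeholders.

```python
# Illustrative UpdateAgentFromManifestRequest body; all values are
# placeholders, not values from the reference.
body = {
    "manifest_id": "manifest-123",                 # placeholder manifest ID
    "parameter_values": {"model": "example-model"},  # hypothetical manifest input
    "description": "Agent materialized from a manifest.",
    "metadata": {"team": "platform"},              # up to 16 key-value pairs
}

assert {"manifest_id", "parameter_values"} <= body.keys()
```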

UpdateAgentRequest

NameTypeDescriptionRequiredDefault
definitionobjectYes
└─ kindAgentKindNo
└─ rai_configRaiConfigConfiguration for Responsible AI (RAI) content filtering and safety features.No
descriptionstringA human-readable description of the agent.No
metadataobjectSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No

UpdateEvalParametersBody

NameTypeDescriptionRequiredDefault
metadataOpenAI.MetadataSet of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters.
No
namestringNo
propertiesobjectSet of 16 immutable key-value pairs that can be attached to an object for storing additional information.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
No

UserProfileMemoryItem

A memory item specifically containing user profile information extracted from conversations, such as preferences, interests, and personal details.
NameTypeDescriptionRequiredDefault
kindenumThe kind of the memory item.
Possible values: user_profile
Yes

WeeklyRecurrenceSchedule

Weekly recurrence schedule.
NameTypeDescriptionRequiredDefault
daysOfWeekarrayDays of the week for the recurrence schedule.Yes
typeenumWeekly recurrence type.
Possible values: Weekly
Yes

WorkflowActionOutputItemResource

NameTypeDescriptionRequiredDefault
action_idstringUnique identifier for the action.Yes
kindstringThe kind of CSDL action (e.g., ‘SetVariable’, ‘InvokeAzureAgent’).Yes
parent_action_idstringID of the parent action if this is a nested action.No
previous_action_idstringID of the previous action if this action follows another.No
statusenumStatus of the action (e.g., ‘in_progress’, ‘completed’, ‘failed’, ‘cancelled’).
Possible values: completed, failed, in_progress, cancelled
Yes
typeenum
Possible values: workflow_action
Yes

WorkflowAgentDefinition

The workflow agent definition.
NameTypeDescriptionRequiredDefault
kindenum
Possible values: workflow
Yes
workflowstringThe CSDL YAML definition of the workflow.No

integer

Type: integer Format: int64