Monitor agents with the Agent Monitoring Dashboard (preview)
Items marked (preview) in this article are currently in public preview. This preview is provided without a service-level agreement, and we don’t recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
Prerequisites
- A Foundry project with at least one agent.
- An Application Insights resource connected to your project.
- Azure role-based access control (RBAC) access to the Application Insights resource. For log-based views, you also need access to the associated Log Analytics workspace. To verify access, open the Application Insights resource in the Azure portal, select Access control (IAM), and confirm your account has an appropriate role. For log access, assign the Log Analytics Reader role.
Connect Application Insights
The Agent Monitoring Dashboard reads telemetry from the Application Insights resource connected to your Foundry project. If you haven’t connected Application Insights yet, follow the tracing setup steps and then return to this article.
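If you instrument your agent application in code, the following is a minimal sketch of routing telemetry to the connected Application Insights resource. It assumes the azure-ai-projects, azure-identity, and azure-monitor-opentelemetry packages and a project endpoint in AZURE_AI_PROJECT_ENDPOINT; the tracing setup article covers the full configuration, including instrumenting agent and model calls.

```python
import os

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from azure.monitor.opentelemetry import configure_azure_monitor

# Connect to the Foundry project with Microsoft Entra ID credentials.
project_client = AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)

# Look up the connection string of the Application Insights resource that is
# connected to the project, then send OpenTelemetry data to it.
connection_string = project_client.telemetry.get_connection_string()
configure_azure_monitor(connection_string=connection_string)
```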
View agent metrics
To view metrics for an agent in the Foundry portal:
- Sign in to Microsoft Foundry. Make sure the New Foundry toggle is on. These steps refer to Foundry (new).

- Navigate to the Build page using the top navigation and select the agent you’d like to view data for.
- Select the Monitor tab to view operational, evaluation, and red-teaming data for your agent.

The dashboard shows:
- Summary cards at the top for high-level metrics.
- Charts and graphs below for granular details. These visualizations reflect data for the selected time range.
Understand the dashboard metrics
Use these definitions to interpret the dashboard:
- Token usage: Token counts for agent traffic in the selected time range. High token usage might indicate verbose prompts or responses that could benefit from optimization.
- Latency: Response time for agent runs. Latency above 10 seconds might indicate model throttling, complex tool calls, or network issues.
- Run success rate: The percentage of runs that complete successfully. A rate below 95% warrants investigation into failed runs.
- Evaluation metrics: Scores produced by evaluators that run on sampled agent outputs. Scores vary by evaluator; review individual evaluator documentation for interpretation guidance.
- Red teaming results: Outcomes from scheduled red team scans, if enabled. Failed scans indicate potential security risks that require remediation.
Monitoring data is stored in the connected Application Insights resource. Retention and billing follow your Application Insights configuration.
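Because the data lives in Application Insights, you can also query it directly. The following is a minimal sketch using the azure-monitor-query package against the Log Analytics workspace that backs your Application Insights resource. The AppDependencies table and the workspace ID placeholder are assumptions; the exact table and column names depend on how your telemetry is exported.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

logs_client = LogsQueryClient(DefaultAzureCredential())

# Summarize dependency spans (agent and model calls exported through
# OpenTelemetry typically land here) over the last 24 hours.
query = """
AppDependencies
| summarize calls = count(), avg_duration_ms = avg(DurationMs) by Name
| order by calls desc
"""

response = logs_client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(hours=24),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```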
Configure settings
Use the Monitor settings panel to configure telemetry, evaluations, and security checks for your agents. These settings control which charts the dashboard shows and which evaluations run.
| Setting | Purpose | Configuration Options |
|---|---|---|
| Continuous evaluation | Runs evaluations on sampled agent responses. | Enable or disable; add evaluators; set the sample rate |
| Scheduled evaluations | Runs evaluations on a schedule to validate performance against benchmarks. | Enable or disable; select an evaluation template and run; set a schedule |
| Red team scans | Runs adversarial tests to detect risks such as data leakage or prohibited actions. | Enable or disable; select an evaluation template and run; set a schedule |
| Alerts | Detects performance anomalies, evaluation failures, and security risks. | Configure alerts for latency, token usage, evaluation scores, or red team findings |
Set up continuous evaluation (Python SDK)
Use the Python SDK to set up continuous evaluation rules for agent responses. This section requires Python 3.9 or later. Set these environment variables before you run the samples:
- AZURE_AI_PROJECT_ENDPOINT: The Foundry project endpoint, as found on the project overview page in the Foundry portal.
- AZURE_AI_AGENT_NAME: The name of the agent to use for evaluation.
- AZURE_AI_MODEL_DEPLOYMENT_NAME: The deployment name of the model.
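The following sketch shows how those environment variables feed into a project client. It assumes the azure-ai-projects and azure-identity packages; names in the preview SDK can change between versions.

```python
import os

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

# Read the values described above; a KeyError here means a variable isn't set.
endpoint = os.environ["AZURE_AI_PROJECT_ENDPOINT"]
agent_name = os.environ["AZURE_AI_AGENT_NAME"]
model_deployment = os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"]

# Authenticate with Microsoft Entra ID and connect to the Foundry project.
project_client = AIProjectClient(endpoint=endpoint, credential=DefaultAzureCredential())
```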
Assign permissions for continuous evaluation
To enable continuous evaluation rules, assign the project managed identity the Azure AI User role.
- In the Azure portal, open the resource for your Foundry project.
- Select Access control (IAM), and then select Add.
- Create a role assignment for Azure AI User.
- For the member, select your Foundry project’s managed identity.
Create an agent
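A minimal sketch of creating an agent with the Python SDK, assuming the azure-ai-projects package and the environment variables defined earlier; method names follow the preview SDK surface and might differ between versions, and the instructions string is only an illustrative placeholder.

```python
import os

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

project_client = AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)

# Create an agent on the configured model deployment.
agent = project_client.agents.create_agent(
    model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
    name=os.environ["AZURE_AI_AGENT_NAME"],
    instructions="You are a helpful assistant.",  # illustrative placeholder
)
print(f"Created agent {agent.id}")
```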
Create a continuous evaluation rule
Define the evaluation and the rule that runs when a response completes. To learn more about supported evaluators, see Built-in evaluators.
Verify continuous evaluation results
- Generate agent traffic (for example, run your app or test the agent in the portal; a minimal SDK sketch follows this list).
- In the Foundry portal, open the agent and select Monitor.
- Review evaluation-related charts for the selected time range.
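For the first step, the following sketch generates traffic by running the agent from the Python SDK. It assumes the azure-ai-projects package and an existing agent ID; the threads, messages, and runs operation names follow the current preview SDK and might differ in older versions, and the user message is an illustrative placeholder.

```python
import os

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

project_client = AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)
agents_client = project_client.agents

# Start a conversation thread and post a user message to the agent.
thread = agents_client.threads.create()
agents_client.messages.create(
    thread_id=thread.id,
    role="user",
    content="What can you help me with?",  # illustrative placeholder
)

# Run the agent and wait for it to finish. Each run emits telemetry that the
# dashboard and continuous evaluation rules can pick up.
run = agents_client.runs.create_and_process(
    thread_id=thread.id,
    agent_id="<your-agent-id>",  # placeholder: ID of the agent to exercise
)
print(f"Run status: {run.status}")
```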
Full sample code
To view the full sample code, see:
Troubleshooting
| Issue | Cause | Resolution |
|---|---|---|
| Dashboard charts are empty | No recent traffic, time range excludes data, or ingestion delay | Generate new agent traffic, expand the time range, and refresh after a few minutes. |
| You see authorization errors | Missing RBAC permissions on Application Insights or Log Analytics | Confirm access in Access control (IAM) for the connected resources. For log access, assign the Log Analytics Reader role. |
| Continuous evaluation results don’t appear | Continuous evaluation isn’t enabled or rule creation failed | Confirm that your rule is enabled and that agent traffic is flowing. If you use the Python SDK setup, confirm the project managed identity has the Azure AI User role. |
| Evaluation runs are skipped | Hourly run limit reached | Increase max_hourly_runs in the evaluation rule configuration or wait for the next hour. The default limit is 100 runs per hour. |