Configure tracing for AI agent frameworks (preview)
Items marked (preview) in this article are currently in public preview. This preview is provided without a service-level agreement, and we don’t recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
In this article, you learn how to:

- Configure automatic tracing for Microsoft Agent Framework and Semantic Kernel
- Set up the `langchain-azure-ai` tracer for LangChain and LangGraph
- Instrument the OpenAI Agents SDK with OpenTelemetry
- Verify that traces appear in the Foundry portal
- Troubleshoot common tracing issues
Prerequisites
- A Foundry project with tracing connected to Application Insights.
- Contributor or higher role on the Application Insights resource for trace ingestion.
- Access to the connected Application Insights resource for viewing traces. For log-based queries, you might also need access to the associated Log Analytics workspace.
- Python 3.10 or later (required for all code samples in this article).
- The `langchain-azure-ai` package version 0.1.0 or later (required for the LangChain and LangGraph samples).
- If you use LangChain or LangGraph, a Python environment with pip installed.
Confirm you can view telemetry
To view trace data, make sure your account has access to the connected Application Insights resource.

- In the Azure portal, open the Application Insights resource connected to your Foundry project.
- Select Access control (IAM).
- Assign an appropriate role to your user or group. If you use log-based queries, start by granting the Log Analytics Reader role.
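As an alternative to the portal steps, you can assign the role with the Azure CLI. A minimal sketch, with placeholder IDs:

```bash
az role assignment create \
  --assignee "<user-or-group-object-id>" \
  --role "Log Analytics Reader" \
  --scope "<application-insights-resource-id>"
```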
Security and privacy
Tracing can capture sensitive information (for example, user inputs, model outputs, and tool arguments and results).

- Enable content recording during development and debugging to see full request and response data. Disable content recording in production environments to protect sensitive data. In the samples in this article, content recording is controlled by settings like `enable_content_recording` and `OTEL_RECORD_CONTENT`.
- Don’t store secrets, credentials, or tokens in prompts or tool arguments.
Trace data stored in Application Insights is subject to your workspace’s data retention settings and Azure Monitor pricing. For cost management, consider adjusting sampling rates or retention periods in production. See Azure Monitor pricing and Configure data retention and archive.
Configure tracing for Microsoft Agent Framework and Semantic Kernel
Microsoft Foundry has native integrations with both Microsoft Agent Framework and Semantic Kernel. Agents built with either framework automatically emit traces when tracing is enabled for your Foundry project; no additional code or packages are required.

To verify tracing is working:

- Run your agent at least once.
- In the Foundry portal, go to Observability > Traces.
- Confirm a new trace appears with spans for your agent’s operations.
Configure tracing for LangChain and LangGraph
Tracing integration for LangChain and LangGraph is currently available only in Python.
Use the `langchain-azure-ai` package to emit OpenTelemetry-compliant spans for LangChain and LangGraph operations. These traces appear in the Observability > Traces view in the Foundry portal.
Sample: LangChain v1 agent with Azure AI tracing
Use this end-to-end sample to instrument a LangChain v1 (preview) agent with the `langchain-azure-ai` tracer. This tracer implements the latest OpenTelemetry (OTel) semantic conventions, so you can view rich traces in the Foundry observability view.
LangChain v1: Install packages
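A minimal installation might look like the following. The exact package set and the `[opentelemetry]` extra are assumptions; check the `langchain-azure-ai` package documentation for current requirements:

```bash
pip install "langchain-azure-ai[opentelemetry]" langchain langchain-openai azure-identity python-dotenv
```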
LangChain v1: Configure environment
Set these environment variables:

- `APPLICATION_INSIGHTS_CONNECTION_STRING`: Azure Monitor Application Insights connection string for tracing.
- `AZURE_OPENAI_ENDPOINT`: Your Azure OpenAI endpoint URL.
- `AZURE_OPENAI_CHAT_DEPLOYMENT`: The chat model deployment name.
- `AZURE_OPENAI_VERSION`: API version, for example `2024-08-01-preview`.
- The SDK resolves Azure credentials using `DefaultAzureCredential`, which supports environment variables, managed identity, and VS Code sign-in.

You can store these values in a `.env` file for local development.
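For example, a `.env` file might look like this (all values are placeholders):

```text
APPLICATION_INSIGHTS_CONNECTION_STRING="InstrumentationKey=...;IngestionEndpoint=https://..."
AZURE_OPENAI_ENDPOINT="https://<your-resource>.openai.azure.com/"
AZURE_OPENAI_CHAT_DEPLOYMENT="<your-chat-deployment>"
AZURE_OPENAI_VERSION="2024-08-01-preview"
```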
LangChain v1: Tracer setup
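A minimal tracer setup might look like the following sketch. It assumes the `AzureAIOpenTelemetryTracer` class exported by `langchain-azure-ai`; check the package reference if your version exposes a different entry point:

```python
import os

from dotenv import load_dotenv
from langchain_azure_ai.callbacks.tracers import AzureAIOpenTelemetryTracer

load_dotenv()

# Emits OpenTelemetry spans to the connected Application Insights resource.
azure_tracer = AzureAIOpenTelemetryTracer(
    connection_string=os.environ["APPLICATION_INSIGHTS_CONNECTION_STRING"],
    enable_content_recording=True,  # Disable in production; see "Security and privacy."
    name="Weather information agent",
)
```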
LangChain v1: Model setup (Azure OpenAI)
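Here's one way to configure `AzureChatOpenAI` with Microsoft Entra ID authentication through `DefaultAzureCredential`:

```python
import os

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from langchain_openai import AzureChatOpenAI

# DefaultAzureCredential resolves credentials from environment variables,
# managed identity, or VS Code sign-in.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

model = AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
    api_version=os.environ["AZURE_OPENAI_VERSION"],
    azure_ad_token_provider=token_provider,
)
```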
LangChain v1: Define tools and prompt
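As a sketch, the following defines a hypothetical `get_weather` tool and a system prompt; replace the stub with a real data source:

```python
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return a short weather report for a city."""
    # Hypothetical stub; replace with a real weather API call.
    return f"It's always sunny in {city}!"

SYSTEM_PROMPT = (
    "You are a weather assistant. Use the get_weather tool to answer "
    "questions about current conditions."
)
```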
LangChain v1: Use runtime context and define a user-info tool
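One possible approach, sketched below, threads per-request data to tools through a context schema and `langgraph.runtime.get_runtime`; the exact runtime API is an assumption based on recent LangChain v1 and LangGraph releases:

```python
from dataclasses import dataclass

from langchain_core.tools import tool
from langgraph.runtime import get_runtime  # Assumed runtime-context API.

@dataclass
class UserContext:
    """Per-invocation data passed to the agent at runtime."""
    user_name: str
    home_city: str

@tool
def get_user_info() -> str:
    """Look up the current user's name and home city from runtime context."""
    runtime = get_runtime(UserContext)
    ctx = runtime.context
    return f"{ctx.user_name} lives in {ctx.home_city}."
```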
LangChain v1: Create the agent
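With the model, tools, and context schema in place, the agent can be assembled with `create_agent`. The parameter names here reflect the v1 preview API and might change:

```python
from langchain.agents import create_agent

agent = create_agent(
    model=model,
    tools=[get_weather, get_user_info],
    system_prompt=SYSTEM_PROMPT,
    context_schema=UserContext,
)
```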
LangChain v1: Run the agent with tracing
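The following sketch attaches the tracer to a run through `callbacks`; the user values are placeholders:

```python
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather where I live?"}]},
    config={"callbacks": [azure_tracer]},  # Attach the tracer to this run.
    context=UserContext(user_name="Ava", home_city="Seattle"),
)
print(result["messages"][-1].content)
```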
With `langchain-azure-ai` enabled, all LangChain v1 operations (LLM calls, tool invocations, and agent steps) emit OpenTelemetry spans using the latest semantic conventions. These traces appear in the Observability > Traces view in the Foundry portal and are linked to your Application Insights resource.
Verify your LangChain v1 traces
After running the agent:

- Wait 2–5 minutes for traces to propagate.
- In the Foundry portal, go to Observability > Traces.
- Look for a trace with the name you specified (for example, “Weather information agent”).
- Expand the trace to see spans for LLM calls, tool invocations, and agent steps.
Sample: LangGraph agent with Azure AI tracing
This sample shows a simple LangGraph agent instrumented with `langchain-azure-ai` to emit OpenTelemetry-compliant traces for graph steps, tool calls, and model invocations.
LangGraph: Install packages
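A minimal installation might look like the following; the exact package set and the `[opentelemetry]` extra are assumptions, so check the package documentation:

```bash
pip install "langchain-azure-ai[opentelemetry]" langgraph langchain-openai azure-identity python-dotenv
```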
LangGraph: Configure environment
Set these environment variables:

- `APPLICATION_INSIGHTS_CONNECTION_STRING`: Azure Monitor Application Insights connection string for tracing.
- `AZURE_OPENAI_ENDPOINT`: Your Azure OpenAI endpoint URL.
- `AZURE_OPENAI_CHAT_DEPLOYMENT`: The chat model deployment name.
- `AZURE_OPENAI_VERSION`: API version, for example `2024-08-01-preview`.

You can store these values in a `.env` file for local development, as in the LangChain v1 sample.
LangGraph: Tracer setup
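The tracer setup mirrors the LangChain v1 sample and assumes the same `AzureAIOpenTelemetryTracer` entry point; only the trace name differs:

```python
import os

from langchain_azure_ai.callbacks.tracers import AzureAIOpenTelemetryTracer

azure_tracer = AzureAIOpenTelemetryTracer(
    connection_string=os.environ["APPLICATION_INSIGHTS_CONNECTION_STRING"],
    enable_content_recording=True,
    name="Music Player Agent",
)
```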
LangGraph: Tools
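A sketch with a single hypothetical `play_song` tool:

```python
from langchain_core.tools import tool

@tool
def play_song(song: str) -> str:
    """Play a song by name."""
    # Hypothetical stub; wire this to a real music service.
    return f"Now playing: {song}"

tools = [play_song]
```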
LangGraph: Model setup (Azure OpenAI)
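Model setup follows the same pattern as the LangChain v1 sample, plus `bind_tools()` so the model can emit tool calls:

```python
import os

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from langchain_openai import AzureChatOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

model = AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
    api_version=os.environ["AZURE_OPENAI_VERSION"],
    azure_ad_token_provider=token_provider,
)

# Bind tools so the model can request tool calls; see the troubleshooting table.
model_with_tools = model.bind_tools(tools)
```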
LangGraph: Build the workflow
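A minimal graph that loops between a model node and a tool node, assuming LangGraph's prebuilt `ToolNode` and `tools_condition` helpers:

```python
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

def call_model(state: MessagesState):
    """Invoke the tool-bound model on the conversation so far."""
    return {"messages": [model_with_tools.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("agent", call_model)
builder.add_node("tools", ToolNode(tools))
builder.add_edge(START, "agent")
# Route to the tool node when the model requests a tool; otherwise end.
builder.add_conditional_edges("agent", tools_condition)
builder.add_edge("tools", "agent")
graph = builder.compile()
```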
LangGraph: Run with tracing
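Pass the tracer in `config` when you invoke the compiled graph (the request text is a placeholder):

```python
result = graph.invoke(
    {"messages": [{"role": "user", "content": "Play 'Blue in Green' by Miles Davis."}]},
    config={"callbacks": [azure_tracer]},
)
print(result["messages"][-1].content)
```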
With `langchain-azure-ai` enabled, your LangGraph execution emits OpenTelemetry-compliant spans for model calls, tool invocations, and graph transitions. These traces flow to Application Insights and appear in the Observability > Traces view in the Foundry portal.
Verify your LangGraph traces
After running the agent:

- Wait 2–5 minutes for traces to propagate.
- In the Foundry portal, go to Observability > Traces.
- Look for a trace with the name you specified (for example, “Music Player Agent”).
- Expand the trace to see spans for graph nodes, tool invocations, and model calls.
Sample: LangChain 0.3 setup with Azure AI tracing
This minimal setup shows how to enable Azure AI tracing in a LangChain 0.3 application by using the `langchain-azure-ai` tracer and `AzureChatOpenAI`.
LangChain 0.3: Install packages
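A minimal installation might look like the following; the version pins and the `[opentelemetry]` extra are assumptions:

```bash
pip install "langchain-azure-ai[opentelemetry]" "langchain>=0.3,<1.0" langchain-openai python-dotenv
```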
LangChain 0.3: Configure environment
Set these environment variables:

- `APPLICATION_INSIGHTS_CONNECTION_STRING`: Application Insights connection string for tracing. To find this value, open your Application Insights resource in the Azure portal, select Overview, and copy the Connection String.
- `AZURE_OPENAI_ENDPOINT`: Azure OpenAI endpoint URL.
- `AZURE_OPENAI_CHAT_DEPLOYMENT`: Chat model deployment name.
- `AZURE_OPENAI_VERSION`: API version, for example `2024-08-01-preview`.
- `AZURE_OPENAI_API_KEY`: Azure OpenAI API key.
This sample uses API key authentication for simplicity. For production workloads, use `DefaultAzureCredential` with `get_bearer_token_provider`, as shown in the LangChain v1 and LangGraph samples.

LangChain 0.3: Tracer and model setup
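A compact sketch that wires the tracer and model together and runs a simple chain; it assumes the same `AzureAIOpenTelemetryTracer` entry point as the other samples, and the trace name is arbitrary:

```python
import os

from langchain_azure_ai.callbacks.tracers import AzureAIOpenTelemetryTracer
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import AzureChatOpenAI

azure_tracer = AzureAIOpenTelemetryTracer(
    connection_string=os.environ["APPLICATION_INSIGHTS_CONNECTION_STRING"],
    enable_content_recording=True,
    name="LangChain 0.3 sample",
)

model = AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
    api_version=os.environ["AZURE_OPENAI_VERSION"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)

prompt = ChatPromptTemplate.from_messages(
    [("system", "You are a helpful assistant."), ("human", "{question}")]
)
chain = prompt | model

# Attach the tracer to the run via callbacks.
response = chain.invoke(
    {"question": "What is OpenTelemetry?"},
    config={"callbacks": [azure_tracer]},
)
print(response.content)
```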
Pass `callbacks=[azure_tracer]` to your chains, tools, or agents to ensure LangChain 0.3 operations are traced. After you run your chain or agent, traces appear in the Observability > Traces view in the Foundry portal within 2–5 minutes.
Configure tracing for OpenAI Agents SDK
The OpenAI Agents SDK supports OpenTelemetry instrumentation. Use the following snippet to configure tracing and export spans to Azure Monitor. If `APPLICATION_INSIGHTS_CONNECTION_STRING` isn’t set, the exporter falls back to the console for local debugging.
Before you run the sample, install the required packages:
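```bash
pip install openai-agents opentelemetry-sdk azure-monitor-opentelemetry-exporter openinference-instrumentation-openai-agents
```

The following sketch configures an OpenTelemetry tracer provider with an Azure Monitor exporter and a console fallback, then runs a trivial agent. The instrumentation package (`openinference-instrumentation-openai-agents`) is one option and an assumption here; substitute the OpenTelemetry instrumentation you use:

```python
import asyncio
import os

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
connection_string = os.environ.get("APPLICATION_INSIGHTS_CONNECTION_STRING")
if connection_string:
    # Export spans to the connected Application Insights resource.
    from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter
    exporter = AzureMonitorTraceExporter(connection_string=connection_string)
else:
    # Fall back to console output for local debugging.
    exporter = ConsoleSpanExporter()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

# Instrument the OpenAI Agents SDK (assumed instrumentor; see its docs).
from openinference.instrumentation.openai_agents import OpenAIAgentsInstrumentor
OpenAIAgentsInstrumentor().instrument(tracer_provider=provider)

from agents import Agent, Runner

agent = Agent(name="Assistant", instructions="You are a helpful assistant.")
result = asyncio.run(Runner.run(agent, "Write a haiku about distributed tracing."))
print(result.final_output)
```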
Verify traces in the Foundry portal
- Sign in to Microsoft Foundry. Make sure the New Foundry toggle is on. These steps refer to Foundry (new).

- Confirm tracing is connected for your project. If needed, follow Set up tracing in Microsoft Foundry.
- Run your agent at least once.
- In the Foundry portal, go to Observability > Traces.
- Confirm a new trace appears with spans for your agent’s operations.
Troubleshoot common issues
| Issue | Cause | Resolution |
|---|---|---|
| You don’t see traces in Foundry | Tracing isn’t connected, there is no recent traffic, or ingestion is delayed | Confirm the Application Insights connection, generate new traffic, and refresh after 2–5 minutes. |
| You don’t see LangChain or LangGraph spans | Tracing callbacks aren’t attached to the run | Confirm you pass the tracer in callbacks (for example, `config = {"callbacks": [azure_tracer]}`) for the run you want to trace. |
| LangChain spans appear but tool calls are missing | Tools aren’t bound to the model, or the tool node isn’t configured | Verify that tools are passed to `bind_tools()` on the model and that tool nodes are added to your graph. |
| Traces appear but are incomplete or missing spans | Content recording is disabled, or some operations aren’t instrumented | Enable `enable_content_recording=True` for full telemetry. For custom operations, add manual spans by using the OpenTelemetry SDK. |
| You see authorization errors when you query telemetry | Missing RBAC permissions on Application Insights or Log Analytics | Confirm access in Access control (IAM) for the connected resources. For log queries, assign the Log Analytics Reader role. |
| Sensitive content appears in traces | Content recording is enabled and prompts, tool arguments, or outputs include sensitive data | Disable content recording in production and redact sensitive data before it enters telemetry. |
Next steps
- Learn core concepts and architecture in the Agent tracing overview.
- If you haven’t enabled tracing yet, see Set up tracing in Microsoft Foundry.
- Visualize agent health and performance metrics with the Agent Monitoring Dashboard.
- Explore the broader observability capabilities in Observability in generative AI.