Integration Guide
Monitor Pydantic AI Agents with Nexus
Add tracing to AI agents built with Pydantic AI. Track every agent run, tool call, and structured output — with minimal boilerplate.
Why use Nexus with Pydantic AI?
- ✓ Structured output tracing — log validated Pydantic model outputs as span data
- ✓ Tool call spans — every `@agent.tool` call appears as a named span
- ✓ Decorator-friendly — wrap agents and tools with minimal code changes
- ✓ Error alerts — get emailed when any agent run fails (Pro)
Step 1 — Install the SDK
```shell
pip install keylightdigital-nexus pydantic-ai
```
Step 2 — Create an API key
Go to /dashboard/keys and create a new API key. Store it as `NEXUS_API_KEY`.
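On macOS or Linux, you can export the key in your shell before running the agent (the key value below is a placeholder, not a real key):

```shell
# Store the key in an environment variable (placeholder value shown)
export NEXUS_API_KEY="nxs_your_key_here"

# Confirm it is set before running your agent
python -c "import os; print('NEXUS_API_KEY set:', bool(os.environ.get('NEXUS_API_KEY')))"
```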
Step 3 — Instrument your Pydantic AI agent
Basic pattern — wrap `agent.run()` with a Nexus trace

```python
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from nexus_client import NexusClient
from pydantic import BaseModel
import os

nexus = NexusClient(
    api_key=os.environ["NEXUS_API_KEY"],
    agent_id="pydantic-research-agent",
)

# Define a structured output model
class ResearchResult(BaseModel):
    summary: str
    key_points: list[str]
    confidence: float

# Create the Pydantic AI agent
model = OpenAIModel("gpt-4o", api_key=os.environ["OPENAI_API_KEY"])
agent = Agent(model, result_type=ResearchResult)

async def run_agent(query: str) -> ResearchResult:
    trace = nexus.start_trace(
        name=f"Research: {query[:60]}",
        metadata={"query": query, "output_type": "ResearchResult"},
    )
    try:
        trace.add_span(
            name="agent-run-start",
            input={"query": query},
        )
        result = await agent.run(query)
        trace.add_span(
            name="agent-run-complete",
            output={
                "summary": result.data.summary,
                "key_point_count": len(result.data.key_points),
                "confidence": result.data.confidence,
                "total_tokens": result.cost().total_tokens if result.cost() else None,
            },
        )
        trace.end(status="success")
        return result.data
    except Exception as e:
        trace.add_span(name="agent-run-error", error=str(e))
        trace.end(status="error")
        raise
```
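The try/except/end bookkeeping above can be factored into a reusable async context manager so every run is traced the same way. Here is a minimal self-contained sketch of that pattern — the `StubTrace` class and `traced_run` helper are illustrative stand-ins, not part of the Nexus SDK; with the real SDK you would call `nexus.start_trace(name=name)` where noted:

```python
import asyncio
from contextlib import asynccontextmanager

# Stub standing in for the object returned by nexus.start_trace()
class StubTrace:
    def __init__(self, name):
        self.name = name
        self.spans = []
        self.status = None

    def add_span(self, name, **data):
        self.spans.append({"name": name, **data})

    def end(self, status):
        self.status = status

@asynccontextmanager
async def traced_run(name):
    """Open a trace, yield it, and always close it with the right status."""
    trace = StubTrace(name)  # with the real SDK: nexus.start_trace(name=name)
    try:
        yield trace
        trace.end(status="success")
    except Exception as e:
        trace.add_span(name="error", error=str(e))
        trace.end(status="error")
        raise

async def main():
    async with traced_run("Research: demo query") as trace:
        trace.add_span(name="agent-run-start", input={"query": "demo query"})
        # await agent.run(...) would go here
    return trace

trace = asyncio.run(main())
print(trace.status)  # success
```

This keeps the success/error bookkeeping in one place, so callers cannot forget to close a trace on an exception path.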
Tool tracing with the `@agent.tool` decorator

```python
from pydantic_ai import Agent, RunContext
from nexus_client import NexusClient
import os

nexus = NexusClient(
    api_key=os.environ["NEXUS_API_KEY"],
    agent_id="pydantic-tool-agent",
)

agent = Agent("openai:gpt-4o")

# Shared trace object (set per-run)
_current_trace = None

@agent.tool
async def web_search(ctx: RunContext[None], query: str) -> str:
    """Search the web for current information."""
    if _current_trace:
        _current_trace.add_span(
            name="tool-web_search",
            input={"query": query},
            output={"result": f"Search results for: {query}"},
        )
    # Your real search implementation here
    return f"Results for '{query}': [search results]"

@agent.tool
async def read_file(ctx: RunContext[None], path: str) -> str:
    """Read a file from the filesystem."""
    if _current_trace:
        _current_trace.add_span(
            name="tool-read_file",
            input={"path": path},
        )
    with open(path) as f:
        return f.read()

async def run_with_tools(task: str) -> str:
    global _current_trace
    _current_trace = nexus.start_trace(
        name=f"Agent with tools: {task[:60]}",
    )
    try:
        result = await agent.run(task)
        _current_trace.end(status="success")
        return str(result.data)
    except Exception as e:
        _current_trace.add_span(name="error", error=str(e))
        _current_trace.end(status="error")
        raise
    finally:
        _current_trace = None
```
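The module-level `_current_trace` global above works when one agent run executes at a time, but concurrent runs would overwrite each other's trace. A `contextvars`-based variant gives each asyncio task its own current trace. This is a self-contained sketch of the pattern using a plain dict as a stand-in for the real Nexus trace object:

```python
import asyncio
import contextvars

# Each asyncio task sees its own value of this variable.
_current_trace: contextvars.ContextVar = contextvars.ContextVar(
    "_current_trace", default=None
)

def add_tool_span(name: str, **data) -> None:
    """Record a span on whichever trace belongs to the current task."""
    trace = _current_trace.get()
    if trace is not None:
        trace["spans"].append({"name": name, **data})

async def run_one(task_id: int) -> dict:
    # Stand-in for nexus.start_trace(...)
    trace = {"name": f"run-{task_id}", "spans": []}
    token = _current_trace.set(trace)
    try:
        await asyncio.sleep(0)  # simulate interleaved agent work
        # A tool called during this run logs to *this* task's trace only.
        add_tool_span("tool-web_search", query=f"query-{task_id}")
        return trace
    finally:
        _current_trace.reset(token)

async def main() -> list:
    # Two concurrent runs; their spans do not mix.
    return await asyncio.gather(run_one(1), run_one(2))

traces = asyncio.run(main())
for t in traces:
    print(t["name"], t["spans"][0]["query"])  # run-1 query-1 / run-2 query-2
```

Because `asyncio.gather` runs each coroutine in its own task context, the `set()` inside one run never leaks into the other, which the global-variable version cannot guarantee.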
What you'll see in Nexus
- Trace list — every agent run with status, duration, and agent name
- Tool span waterfall — each `@agent.tool` call as a timed bar
- Structured output — Pydantic model fields logged in span output
- Error alerts — Pro users get an email when validation or runtime errors occur
Next steps
- API Reference — full REST API documentation
- Interactive demo — see sample traces without signing up
- LangChain guide — another popular Python framework
- Blog: How to Monitor AI Agents in Production
- Nexus pricing — free plan or $9/mo Pro
- GitHub — open-source SDK
Start monitoring your Pydantic AI agents
Free plan: 1,000 traces/month. No credit card needed.