Monitor OpenAI Agents SDK with Nexus
Instrument agents built with OpenAI's Agents SDK. Track every LLM call, tool execution, and agent handoff — in TypeScript or Python.
Why use Nexus with the OpenAI Agents SDK?
- ✓ Agent + tool spans — every tool call and handoff appears as a span
- ✓ Trace the full loop — from user input to final response across all turns
- ✓ Error alerts — get emailed when any agent run fails (Pro)
- ✓ TypeScript + Python — works with both SDK flavors
Step 1 — Install both SDKs
TypeScript
npm install @keylightdigital/nexus openai
Python
pip install keylightdigital-nexus openai
Step 2 — Create an API key
Go to /dashboard/keys and create a new API key. Add it to your environment as NEXUS_API_KEY alongside your OPENAI_API_KEY.
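Both keys must be available in the environment where the agent runs. A typical shell setup might look like this (the key values shown are placeholders, not real formats):

```shell
# Replace with the keys from your OpenAI and Nexus dashboards.
export OPENAI_API_KEY="sk-..."
export NEXUS_API_KEY="your-nexus-key"
```

If you prefer a .env file, load it with whatever mechanism your framework provides before the client objects in Step 3 are constructed.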
Step 3 — Instrument your agent
TypeScript — instrumented agent loop
import OpenAI from 'openai';
import { NexusClient } from '@keylightdigital/nexus';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const nexus = new NexusClient({
  apiKey: process.env.NEXUS_API_KEY!,
  agentId: 'openai-research-agent',
});

async function runAgent(userMessage: string) {
  const trace = await nexus.startTrace({
    name: `OpenAI agent: ${userMessage.slice(0, 60)}`,
    metadata: { model: 'gpt-4o' },
  });

  const messages: OpenAI.Chat.ChatCompletionMessageParam[] = [
    { role: 'user', content: userMessage },
  ];

  try {
    for (let turn = 1; turn <= 5; turn++) {
      await trace.addSpan({ name: `llm-call-turn-${turn}`, input: { turn } });

      const response = await openai.chat.completions.create({
        model: 'gpt-4o',
        messages,
      });

      const choice = response.choices[0];
      await trace.addSpan({
        name: `llm-response-turn-${turn}`,
        output: { finish_reason: choice.finish_reason },
      });

      if (choice.finish_reason === 'stop') {
        await trace.end({ status: 'success' });
        return choice.message.content;
      }

      // Feed the assistant's reply back in so the next turn makes progress.
      messages.push(choice.message);
    }

    // Turn cap reached without a final answer.
    await trace.end({ status: 'error' });
    return 'Max turns reached';
  } catch (error) {
    await trace.end({ status: 'error' });
    throw error;
  }
}
Python — instrumented agent loop
from openai import OpenAI
from nexus_client import NexusClient
import os
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
nexus = NexusClient(
api_key=os.environ["NEXUS_API_KEY"],
agent_id="openai-research-agent",
)
def run_agent(user_message: str) -> str:
    trace = nexus.start_trace(
        name=f"OpenAI agent: {user_message[:60]}",
        metadata={"model": "gpt-4o"},
    )
    messages = [{"role": "user", "content": user_message}]
    try:
        for turn in range(1, 6):
            trace.add_span(name=f"llm-call-turn-{turn}", input={"turn": turn})
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=messages,
            )
            choice = response.choices[0]
            trace.add_span(
                name=f"llm-response-turn-{turn}",
                output={"finish_reason": choice.finish_reason},
            )
            if choice.finish_reason == "stop":
                trace.end(status="success")
                return choice.message.content or ""
            # Feed the assistant's reply back in so the next turn makes progress.
            messages.append(choice.message)
        # Turn cap reached without a final answer.
        trace.end(status="error")
        return "Max turns reached"
    except Exception:
        trace.end(status="error")
        raise
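When the model requests tools (finish_reason of "tool_calls"), each tool execution can get a span of its own before the results are fed back into the conversation. A minimal sketch of that bookkeeping, kept separate from the trace client so the logic is easy to test — the tool registry is illustrative, the span records mirror the input/output dicts passed to trace.add_span above, and tool calls are shown as plain dicts for clarity (the SDK returns objects with the same fields):

```python
import json

# Illustrative tool registry: name -> callable.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def execute_tool_calls(tool_calls, tools=TOOLS):
    """Run each requested tool; return (span_records, tool_messages).

    span_records are dicts you can pass to trace.add_span one by one;
    tool_messages are appended to `messages` so the model sees the
    results on the next turn.
    """
    spans, messages = [], []
    for call in tool_calls:
        name = call["function"]["name"]
        args = json.loads(call["function"]["arguments"])
        result = tools[name](**args)
        spans.append(
            {"name": f"tool-{name}", "input": args, "output": {"result": result}}
        )
        messages.append(
            {"role": "tool", "tool_call_id": call["id"], "content": result}
        )
    return spans, messages
```

Inside the loop, you would call this when finish_reason is "tool_calls", record each returned span with trace.add_span, and extend `messages` with the tool messages before the next completion request.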
What you'll see in Nexus
- Trace list — every agent run as a row with status, duration, and agent name
- Span waterfall — each LLM call and tool use as a timed bar
- Input/output inspector — click any span to expand the full prompt and response
- Error alerts — Pro users get an email when any agent run fails
Next steps
- API Reference — full REST API documentation
- Interactive demo — see sample traces without signing up
- Anthropic SDK guide — if you use Claude instead
- Blog: How to Monitor AI Agents in Production
- Nexus pricing — free plan or $9/mo Pro
- GitHub — open-source SDK
Start monitoring your OpenAI agents
Free plan: 1,000 traces/month. No credit card needed.