Vercel AI SDK Integration
Add distributed tracing to your Vercel AI SDK applications. Works with generateText, streamText, generateObject, and custom tool calls.
Install
Add the Nexus SDK alongside the Vercel AI SDK:
npm install ai @nexus/sdk
Initialize both clients:
import { generateText } from 'ai'
import { openai } from '@ai-sdk/openai'
import { NexusClient } from '@nexus/sdk'
const nexus = new NexusClient({ apiKey: process.env.NEXUS_API_KEY! })
Basic trace wrapping
Wrap each top-level generateText call in a Nexus trace:
async function callAgent(prompt: string): Promise<string> {
  const trace = await nexus.startTrace({
    agentId: 'vercel-ai-agent',
    input: prompt,
  })

  try {
    const { text, usage } = await generateText({
      model: openai('gpt-4o'),
      prompt,
    })

    await nexus.endTrace(trace.id, {
      output: text,
      status: 'success',
      metadata: {
        promptTokens: usage.promptTokens,
        completionTokens: usage.completionTokens,
        model: 'gpt-4o',
      },
    })

    return text
  } catch (err) {
    await nexus.endTrace(trace.id, { output: String(err), status: 'error' })
    throw err
  }
}
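The try/catch pattern above can be factored into a reusable wrapper. A sketch, assuming a client with the same startTrace/endTrace shape as the NexusClient calls shown here:

```typescript
// Minimal client surface mirroring the startTrace/endTrace calls above.
// The real NexusClient API is assumed to match this shape.
interface TraceClient {
  startTrace(opts: { agentId: string; input: string }): Promise<{ id: string }>
  endTrace(
    id: string,
    opts: { output: string; status: 'success' | 'error'; metadata?: Record<string, unknown> },
  ): Promise<void>
}

// Runs `fn` inside a trace, ending it as success or error automatically.
async function withTrace<T>(
  client: TraceClient,
  agentId: string,
  input: string,
  fn: () => Promise<T>,
): Promise<T> {
  const trace = await client.startTrace({ agentId, input })
  try {
    const result = await fn()
    await client.endTrace(trace.id, { output: String(result), status: 'success' })
    return result
  } catch (err) {
    await client.endTrace(trace.id, { output: String(err), status: 'error' })
    throw err // rethrow so callers still see the failure
  }
}
```

Because the client is injected, the same wrapper works with the real NexusClient or a stub in tests.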
streamText observability
For streaming responses, start the trace before the stream and end it in the onFinish callback:
import { streamText } from 'ai'

async function streamAgent(prompt: string) {
  const trace = await nexus.startTrace({
    agentId: 'vercel-ai-stream-agent',
    input: prompt,
  })

  return streamText({
    model: openai('gpt-4o'),
    prompt,
    onFinish: async ({ text, usage, finishReason }) => {
      await nexus.endTrace(trace.id, {
        output: text,
        status: finishReason === 'stop' ? 'success' : 'error',
        metadata: {
          promptTokens: usage.promptTokens,
          completionTokens: usage.completionTokens,
          finishReason,
        },
      })
    },
  })
}
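On the consuming side, onFinish only fires (and the trace is only ended) once the stream has been fully drained. A small helper sketch for draining a textStream, assuming the async-iterable-of-strings shape the Vercel AI SDK exposes:

```typescript
// Drains an async iterable of text deltas into one string. Works with
// the `textStream` property on a streamText result, or any other
// async iterable of strings.
async function collectStream(textStream: AsyncIterable<string>): Promise<string> {
  let full = ''
  for await (const delta of textStream) {
    full += delta
  }
  return full
}

// Usage with the streamAgent function above (hypothetical prompt):
//   const result = await streamAgent('Summarize the latest deploy logs')
//   const text = await collectStream(result.textStream)
```

If the consumer abandons the stream early, onFinish may never run, so prefer draining (or cancelling) the stream explicitly to avoid orphaned traces.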
Tool call spans
Wrap each tool's execute function with a Nexus span to capture tool input, output, and duration:
import { tool } from 'ai'
import { z } from 'zod'

function makeTracedTool(traceId: string, name: string, fn: (args: any) => Promise<unknown>) {
  return async (args: unknown) => {
    const span = await nexus.startSpan(traceId, {
      name: 'tool:' + name,
      type: 'tool',
      metadata: { input: JSON.stringify(args) },
    })
    try {
      const result = await fn(args)
      await nexus.endSpan(span.id, { output: JSON.stringify(result) })
      return result
    } catch (err) {
      await nexus.endSpan(span.id, { output: String(err), status: 'error' })
      throw err
    }
  }
}
const trace = await nexus.startTrace({ agentId: 'tool-agent', input: userMessage })

const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: userMessage,
  tools: {
    getWeather: tool({
      description: 'Get current weather for a city',
      parameters: z.object({ city: z.string() }),
      execute: makeTracedTool(trace.id, 'getWeather', async ({ city }) => {
        return fetchWeatherAPI(city)
      }),
    }),
  },
})

// Don't forget to close the trace once the call completes.
await nexus.endTrace(trace.id, { output: text, status: 'success' })
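To exercise the span-wrapping pattern without network calls, a variant of makeTracedTool can take the client explicitly so a stub can stand in for NexusClient. A sketch; the startSpan/endSpan shapes mirror the calls used above and are assumed to match the real API:

```typescript
// Minimal span-client surface matching the startSpan/endSpan calls above.
interface SpanClient {
  startSpan(
    traceId: string,
    opts: { name: string; type: string; metadata: { input: string } },
  ): Promise<{ id: string }>
  endSpan(spanId: string, opts: { output: string; status?: string }): Promise<void>
}

// Like makeTracedTool, but with an injected client and typed args/result.
function makeTracedToolWith<A, R>(
  client: SpanClient,
  traceId: string,
  name: string,
  fn: (args: A) => Promise<R>,
) {
  return async (args: A): Promise<R> => {
    const span = await client.startSpan(traceId, {
      name: 'tool:' + name,
      type: 'tool',
      metadata: { input: JSON.stringify(args) },
    })
    try {
      const result = await fn(args)
      await client.endSpan(span.id, { output: JSON.stringify(result) })
      return result
    } catch (err) {
      await client.endSpan(span.id, { output: String(err), status: 'error' })
      throw err
    }
  }
}
```

The injected client also makes it easy to unit-test that tool failures end their span with an error status before rethrowing.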
Agent loop patterns
For multi-step agent loops, use one trace per user task and a span per step:
import type { CoreMessage } from 'ai'

async function agentLoop(task: string, maxSteps = 5): Promise<string> {
  const trace = await nexus.startTrace({ agentId: 'loop-agent', input: task })
  const messages: CoreMessage[] = [{ role: 'user', content: task }]
  let step = 0

  try {
    while (step < maxSteps) {
      step++
      const span = await nexus.startSpan(trace.id, {
        name: 'step.' + step,
        type: 'llm',
        metadata: { step, messageCount: messages.length },
      })

      const { text, toolCalls, finishReason } = await generateText({
        model: openai('gpt-4o'),
        messages,
        tools: { /* ... */ },
      })

      await nexus.endSpan(span.id, {
        output: text || '(tool calls)',
        metadata: { finishReason, toolCallCount: toolCalls?.length ?? 0 },
      })

      if (finishReason === 'stop') {
        await nexus.endTrace(trace.id, { output: text, status: 'success' })
        return text
      }

      // Continue with tool results...
    }
  } catch (err) {
    await nexus.endTrace(trace.id, { output: String(err), status: 'error' })
    throw err
  }

  // Loop exhausted without a final answer; end the trace exactly once.
  await nexus.endTrace(trace.id, { output: 'max steps reached', status: 'error' })
  throw new Error('Agent exceeded max steps')
}
Metadata best practices
- Log model name on every span — enables cost and latency comparison across model versions
- Record promptTokens and completionTokens from usage — essential for cost tracking
- Include finishReason — distinguishes clean stops from tool-call loops and max-length truncation
- Add userId or sessionId to the trace — enables per-user debugging in production
- Use an environment tag — separates dev/staging/prod traces in the dashboard
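As a sketch, the fields above can be assembled by a small helper. The usage shape matches the generateText result used earlier; the userId, sessionId, and environment fields are illustrative names for your own application values:

```typescript
// Illustrative metadata shape covering the recommendations above.
interface TraceMetadata {
  model: string
  promptTokens: number
  completionTokens: number
  finishReason: string
  userId?: string
  sessionId?: string
  environment: string
}

// Builds a consistent metadata object for endTrace/endSpan calls.
function buildTraceMetadata(
  model: string,
  usage: { promptTokens: number; completionTokens: number },
  finishReason: string,
  ctx: { userId?: string; sessionId?: string } = {},
): TraceMetadata {
  return {
    model,
    promptTokens: usage.promptTokens,
    completionTokens: usage.completionTokens,
    finishReason,
    userId: ctx.userId,
    sessionId: ctx.sessionId,
    // Falls back to 'development' when NODE_ENV is unset.
    environment: process.env.NODE_ENV ?? 'development',
  }
}
```

Centralizing this in one helper keeps metadata keys consistent across traces, which matters when filtering by them in the dashboard.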
Start tracing your Vercel AI SDK app
Free plan includes 1,000 traces/month. No credit card required.