Tracing Mastra Agents: Observability for TypeScript Agent Workflows
Mastra is a TypeScript-native agent framework with Agents, Workflows, and Networks built for Node.js and Vercel. When a workflow step fails silently, a tool call throws on malformed JSON, or a network routes to the wrong agent, you need trace visibility to debug it. Here's how to instrument Mastra with Nexus.
What Mastra adds
Mastra gives you three main primitives: Agents (LLM-backed entities with tools and memory), Workflows (DAG-based step pipelines with conditional branching and parallel steps), and Networks (collections of agents that can route between one another). It runs on Node.js and integrates natively with Vercel's ecosystem.
TypeScript agent frameworks have a distinct failure surface compared to their Python counterparts:
- Async workflow steps fail silently: a Promise that rejects in a workflow step can be caught by the framework and marked as a soft error rather than surfacing to your monitoring.
- Tool call type errors at runtime: TypeScript types are erased at compile time — a tool that receives malformed JSON from the LLM throws at runtime even though the code type-checked cleanly in development.
- Memory reads are invisible: Mastra's built-in memory abstraction stores and retrieves context across agent turns, but memory retrieval failures or stale context are hard to debug without trace visibility.
- Network routing errors: when an agent network routes to the wrong agent, there's no built-in way to trace which routing decision was made and why.
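As a concrete instance of the second failure mode, here is a minimal sketch (the `SearchInput` type and `parseSearchInput` helper are illustrative, not Mastra API) of guarding a tool input at runtime, since the static type offers no protection once the LLM's output arrives:

```typescript
// Hypothetical runtime guard for a tool's input. The `SearchInput` type only
// exists at compile time, so malformed JSON from the LLM must be validated
// explicitly at runtime.
type SearchInput = { query: string };

function parseSearchInput(raw: string): SearchInput {
  const parsed: unknown = JSON.parse(raw); // throws on malformed JSON
  if (
    typeof parsed !== 'object' ||
    parsed === null ||
    typeof (parsed as { query?: unknown }).query !== 'string'
  ) {
    throw new Error(`invalid tool input: ${raw.slice(0, 100)}`);
  }
  return parsed as SearchInput;
}
```

Mastra's zod-based inputSchema handles this validation for tool calls; the same discipline applies to any other untyped boundary (webhooks, queue payloads) your agent touches.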
Instrumenting a Mastra Agent
Mastra agents expose a generate() method that runs a single turn. Wrap it in a Nexus trace to capture latency, tool calls, and errors:
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { NexusClient } from 'nexus-sdk';
const nexus = new NexusClient({ apiKey: process.env.NEXUS_API_KEY! });
const researchAgent = new Agent({
  name: 'research-agent',
  instructions: 'You are a research assistant. Search for information and summarize findings.',
  model: openai('gpt-4o'),
  tools: { /* your tools */ },
});
async function runWithTracing(
  prompt: string,
  userId: string,
): Promise<string> {
  const trace = await nexus.startTrace({
    agentId: 'mastra-research-agent',
    name: `research: ${prompt.slice(0, 60)}`,
    status: 'running',
    startedAt: new Date().toISOString(),
    metadata: {
      userId,
      promptLength: prompt.length,
      environment: process.env.NODE_ENV ?? 'development',
    },
  });

  const t0 = Date.now();
  try {
    const result = await researchAgent.generate(prompt);
    const latencyMs = Date.now() - t0;

    await nexus.endTrace(trace.traceId, {
      status: 'success',
      latencyMs,
      metadata: {
        outputLength: result.text.length,
        finishReason: result.finishReason,
        usage: result.usage,
      },
    });

    return result.text;
  } catch (error) {
    await nexus.endTrace(trace.traceId, {
      status: 'error',
      latencyMs: Date.now() - t0,
      error: error instanceof Error ? error.message : String(error),
    });
    throw error;
  }
}
Tracing Mastra Workflows
Mastra Workflows are DAG-based step pipelines. Each step maps cleanly to a Nexus span — emit one span per step with latency and step-specific metadata:
import { Workflow, Step } from '@mastra/core/workflows';
import { NexusClient } from 'nexus-sdk';
const nexus = new NexusClient({ apiKey: process.env.NEXUS_API_KEY! });
// Wrap each step executor to emit spans
function tracedStep<T>(
  stepName: string,
  traceId: string,
  fn: () => Promise<T>,
): Promise<T> {
  const t0 = Date.now();
  return fn()
    .then(async (result) => {
      await nexus.addSpan(traceId, {
        name: `step:${stepName}`,
        status: 'success',
        latencyMs: Date.now() - t0,
        metadata: { stepName },
      });
      return result;
    })
    .catch(async (error) => {
      await nexus.addSpan(traceId, {
        name: `step:${stepName}`,
        status: 'error',
        latencyMs: Date.now() - t0,
        error: error instanceof Error ? error.message : String(error),
      });
      throw error;
    });
}
async function runWorkflowWithTracing(input: string, userId: string) {
  const trace = await nexus.startTrace({
    agentId: 'mastra-research-workflow',
    name: `workflow: ${input.slice(0, 60)}`,
    status: 'running',
    startedAt: new Date().toISOString(),
    metadata: { userId },
  });

  const t0 = Date.now();
  try {
    // Step 1: fetch context
    const context = await tracedStep('fetch-context', trace.traceId, async () => {
      return fetchContext(input);
    });

    // Step 2: research
    const research = await tracedStep('research', trace.traceId, async () => {
      return researchAgent.generate(`Research: ${input}\nContext: ${context}`);
    });

    // Step 3: summarize
    const summary = await tracedStep('summarize', trace.traceId, async () => {
      return summarizerAgent.generate(`Summarize: ${research.text}`);
    });

    await nexus.endTrace(trace.traceId, {
      status: 'success',
      latencyMs: Date.now() - t0,
    });

    return summary.text;
  } catch (error) {
    await nexus.endTrace(trace.traceId, {
      status: 'error',
      latencyMs: Date.now() - t0,
      error: error instanceof Error ? error.message : String(error),
    });
    throw error;
  }
}
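The same tracedStep helper extends to parallel branches of the DAG: fire independent steps with Promise.all and each one still emits its own span. A self-contained sketch, with the Nexus client stubbed out so it runs standalone (the stub and the step bodies are illustrative, not part of the Nexus SDK):

```typescript
// Stub of the Nexus client so this sketch runs standalone; in real code
// `nexus.addSpan` comes from nexus-sdk.
const spanNames: string[] = [];
const nexus = {
  addSpan: async (_traceId: string, span: { name: string; latencyMs: number }) => {
    spanNames.push(span.name);
  },
};

async function tracedStep<T>(
  stepName: string,
  traceId: string,
  fn: () => Promise<T>,
): Promise<T> {
  const t0 = Date.now();
  const result = await fn();
  await nexus.addSpan(traceId, { name: `step:${stepName}`, latencyMs: Date.now() - t0 });
  return result;
}

// Two independent steps run concurrently; each emits its own span.
async function runParallelSteps(traceId: string): Promise<[string, string]> {
  return Promise.all([
    tracedStep('fetch-news', traceId, async () => 'news results'),
    tracedStep('fetch-papers', traceId, async () => 'paper results'),
  ]);
}
```

Error handling is omitted here for brevity; a production version would mirror the catch branch shown earlier so a rejected branch still records an error span.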
Tracking tool calls
Mastra tool calls happen inside generate(). To trace individual tool executions, wrap your tool functions with a span emitter:
import { createTool } from '@mastra/core/tools';
import { z } from 'zod';
function tracedTool<I, T>(
  toolName: string,
  traceId: string,
  fn: (input: I) => Promise<T>,
): (input: I) => Promise<T> {
  return async (input: I) => {
    const t0 = Date.now();
    try {
      const result = await fn(input);
      await nexus.addSpan(traceId, {
        name: `tool:${toolName}`,
        status: 'success',
        latencyMs: Date.now() - t0,
        metadata: {
          inputSummary: JSON.stringify(input).slice(0, 200),
        },
      });
      return result;
    } catch (error) {
      await nexus.addSpan(traceId, {
        name: `tool:${toolName}`,
        status: 'error',
        latencyMs: Date.now() - t0,
        error: error instanceof Error ? error.message : String(error),
      });
      throw error;
    }
  };
}
// Create a tool with tracing
function makeTracedSearchTool(traceId: string) {
  return createTool({
    id: 'web-search',
    description: 'Search the web for information',
    inputSchema: z.object({ query: z.string() }),
    // Mastra passes the validated input on `context`
    execute: tracedTool('web-search', traceId, async ({ context }: { context: { query: string } }) => {
      return performWebSearch(context.query);
    }),
  });
}
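One caveat with inputSummary: tool inputs can carry secrets or PII, and spans outlive the request. A hypothetical redaction pass (not part of the Nexus SDK) before the truncation keeps spans safe to store:

```typescript
// Hypothetical helper: redact likely-sensitive keys before attaching tool
// input to a span, then truncate as before.
function summarizeInput(
  input: unknown,
  sensitiveKeys: string[] = ['apiKey', 'token', 'password'],
): string {
  // JSON.stringify's replacer visits every key, including nested ones.
  const json = JSON.stringify(input, (key, value) =>
    sensitiveKeys.includes(key) ? '[redacted]' : value,
  );
  return (json ?? 'undefined').slice(0, 200);
}
```

Swap `JSON.stringify(input).slice(0, 200)` in the wrapper for `summarizeInput(input)` and extend the key list to match your tools.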
Adding metadata for filtering
The most useful Mastra traces include metadata that lets you filter by workflow version, user segment, or environment in the Nexus dashboard:
const trace = await nexus.startTrace({
  agentId: 'mastra-research-agent',
  name: `research: ${prompt.slice(0, 60)}`,
  status: 'running',
  startedAt: new Date().toISOString(),
  metadata: {
    // User context
    userId,
    userPlan: user.plan, // 'free' | 'pro'

    // Workflow context
    workflowVersion: '2.1.0',
    stepCount: workflow.steps.length,

    // Environment
    environment: process.env.NODE_ENV ?? 'development',
    region: process.env.VERCEL_REGION ?? 'local',

    // Input characteristics
    promptTokenEstimate: Math.ceil(prompt.length / 4),
  },
});
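The promptTokenEstimate field uses the common rough heuristic of about 4 characters per token for English text — cheap enough to compute on every trace:

```typescript
// Rough heuristic: ~4 characters per token for English text. Good enough
// for dashboard filtering; use a real tokenizer if billing depends on it.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}
```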
Next steps
With Mastra traces flowing into Nexus, you get per-workflow latency breakdown, tool call failure rates, and error rate trends per agent. The Nexus Pro plan adds webhook alerts so you get notified when a workflow's error rate spikes above your threshold — without having to monitor dashboards manually.
Sign up for a free Nexus account and add observability to your Mastra agents in under 5 minutes.
Add observability to Mastra
Free tier, no credit card required. TypeScript SDK included.