# Trace Metadata Best Practices
Structured metadata turns raw traces into searchable, debuggable data. This guide shows the recommended keys for each span type so you get the most out of the Nexus trace viewer.
## Why metadata matters
Every span in Nexus can carry a metadata object — a flat or nested JSON blob.
Nexus stores it as-is and surfaces key fields in the trace detail view. The more structured your metadata,
the easier it is to filter by model, find expensive tool calls, or trace error patterns across thousands of runs.
## LLM call metadata
For spans that wrap a single LLM API call, include at minimum: model, token counts, and a truncated prompt_preview.
```typescript
await span.end({
  status: 'success',
  metadata: {
    model: 'claude-sonnet-4-6',            // which model was called
    input_tokens: usage.input_tokens,      // tokens in the prompt
    output_tokens: usage.output_tokens,    // tokens in the completion
    stop_reason: message.stop_reason,      // 'end_turn', 'tool_use', etc.
    prompt_preview: prompt.slice(0, 200),  // first 200 chars of input
  },
})
```

```python
span.end(
    status="success",
    metadata={
        "model": "claude-sonnet-4-6",
        "input_tokens": message.usage.input_tokens,
        "output_tokens": message.usage.output_tokens,
        "stop_reason": message.stop_reason,
        "prompt_preview": prompt[:200],
    },
)
```
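If you record LLM metadata from several call sites, a small helper keeps the keys and truncation limits consistent. This is a hypothetical sketch, not part of the Nexus SDK: the `llm_metadata` name and the `usage` dict shape are assumptions.

```python
def llm_metadata(model: str, usage: dict, stop_reason: str,
                 prompt: str, preview_chars: int = 200) -> dict:
    """Assemble the recommended LLM-call metadata fields in one place.

    Hypothetical helper -- adapt the `usage` accessor to whatever
    shape your LLM client actually returns.
    """
    return {
        "model": model,
        "input_tokens": usage["input_tokens"],
        "output_tokens": usage["output_tokens"],
        "stop_reason": stop_reason,
        # Truncate so a long prompt never bloats the trace payload.
        "prompt_preview": prompt[:preview_chars],
    }
```

Every span then calls `span.end(status="success", metadata=llm_metadata(...))`, so a renamed key only has to change in one place.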
## Tool call metadata
For spans that represent a single tool execution, capture the tool name, input args (truncated), and a preview of the result.
```typescript
await span.end({
  status: 'success',
  metadata: {
    tool_name: 'web_search',
    args_preview: JSON.stringify(toolInput).slice(0, 300),
    result_preview: toolResult.slice(0, 300),
    duration_ms: Date.now() - spanStart,
  },
})
```

```python
import json
import time

span.end(
    status="success",
    metadata={
        "tool_name": "web_search",
        "args_preview": json.dumps(tool_input)[:300],
        "result_preview": str(tool_result)[:300],
        "duration_ms": int((time.time() - span_start) * 1000),
    },
)
```
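Plain slicing drops data silently, so you cannot tell in the dashboard whether a preview is complete. A small truncation helper can make the cut-off explicit; this is a hypothetical sketch (the `preview` function is not part of the Nexus SDK):

```python
import json

def preview(value, limit: int = 300) -> str:
    """Serialize a tool input or result and truncate it, marking
    truncated output so a cut-off preview is visible at a glance.

    Hypothetical helper -- not part of the Nexus SDK.
    """
    # Strings pass through; anything else gets JSON-serialized,
    # falling back to str() for non-serializable objects.
    text = value if isinstance(value, str) else json.dumps(value, default=str)
    if len(text) <= limit:
        return text
    return text[:limit] + "…[truncated]"
```

Then `args_preview: preview(tool_input)` and `result_preview: preview(tool_result)` behave identically for every tool span.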
## Error metadata
When a span or trace ends with `status: 'error'`, include the error type, message, and a stack trace preview so you can diagnose failures directly in the Nexus dashboard.
```typescript
} catch (err) {
  const error = err instanceof Error ? err : new Error(String(err))
  await trace.end({
    status: 'error',
    metadata: {
      error_type: error.name,  // 'TypeError', 'RateLimitError', etc.
      error_message: error.message,
      stack_preview: error.stack?.slice(0, 500),
    },
  })
}
```

```python
except Exception as err:
    import traceback

    trace.end(
        status="error",
        metadata={
            "error_type": type(err).__name__,
            "error_message": str(err),
            "stack_preview": traceback.format_exc()[:500],
        },
    )
```
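If several handlers end traces on failure, the error fields can be built in one place. A minimal sketch, assuming a hypothetical `error_metadata` helper (not a Nexus API); note that `traceback.format_exc()` only captures the traceback while the exception is still being handled, so the helper must be called inside the `except` block:

```python
import traceback

def error_metadata(err: BaseException, stack_chars: int = 500) -> dict:
    """Build the recommended error metadata from a caught exception.

    Must be called from inside the `except` block so that
    traceback.format_exc() still sees the active exception.
    """
    return {
        "error_type": type(err).__name__,     # 'TypeError', 'RateLimitError', etc.
        "error_message": str(err),
        "stack_preview": traceback.format_exc()[:stack_chars],
    }
```

Usage: `trace.end(status="error", metadata=error_metadata(err))`.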
## Custom & business metadata
Add any domain-specific fields that help you segment and filter traces. Common examples: user identifiers, environment tags, and pipeline stage markers.
```typescript
const trace = await nexus.startTrace({
  name: 'invoice-processing-agent',
  metadata: {
    environment: process.env.NODE_ENV,  // 'production', 'staging'
    user_id: req.user.id,               // correlate with your own data
    pipeline_stage: 'extraction',       // where in your pipeline
    invoice_id: invoice.id,             // domain-specific ID
  },
})
```

```python
import os

trace = nexus.start_trace(
    name="invoice-processing-agent",
    metadata={
        "environment": os.getenv("ENV", "development"),
        "user_id": user.id,
        "pipeline_stage": "extraction",
        "invoice_id": invoice.id,
    },
)
```
## Searchable in traces
The Nexus trace list lets you filter by agent name, trace name, and status. Metadata fields are visible in the trace detail view. Use consistent key names across your spans to make debugging fast:
| Metadata key | Span type | Why it helps |
|---|---|---|
| model | LLM call | Filter by model name; compare costs across models |
| input_tokens | LLM call | Estimate prompt costs; identify expensive inputs |
| output_tokens | LLM call | Track generation length; correlate with latency |
| tool_name | Tool call | See which tools fire most; find slow tool calls |
| error_type | Error | Group failures by class; spot RateLimitError spikes |
| environment | Any | Separate prod from staging in a shared account |
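One way to enforce consistent key names is a tiny shared constants module that every instrumentation site imports. This is a project convention sketch, not a Nexus feature; the module name and constants are assumptions:

```python
# metadata_keys.py -- one shared vocabulary for metadata keys, so a
# typo like "tool_nmae" becomes an import error instead of a silent
# gap in your trace filters. (Convention sketch, not a Nexus API.)

MODEL = "model"
INPUT_TOKENS = "input_tokens"
OUTPUT_TOKENS = "output_tokens"
STOP_REASON = "stop_reason"
TOOL_NAME = "tool_name"
ERROR_TYPE = "error_type"
ERROR_MESSAGE = "error_message"
ENVIRONMENT = "environment"
```

Call sites then write `metadata={MODEL: "claude-sonnet-4-6", ...}`, and renaming a key across the codebase is a one-line change.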