
n8n Agent Observability

Add trace-level observability to n8n AI Agent workflows using HTTP Request nodes and the Nexus REST API — no SDK required.

Overview

n8n is a workflow automation platform with a visual node-based UI and a built-in AI Agent node that connects to any LLM. Because n8n is not a Python or TypeScript runtime, Nexus integrates through its REST API rather than an SDK.

The pattern uses three HTTP Request nodes — one before and two after the AI Agent — to start a trace, record the agent's output and token counts as a span, and close the trace with its final status. An n8n expression referencing the first HTTP node carries its trace_id into every subsequent node.

What you get

  • Per-run latency and success/error rates in your Nexus dashboard
  • AI Agent output captured as span output
  • Input and output token counts from the AI Agent node
  • Error details from failed runs
  • No SDK install — pure HTTP

You will need a Nexus API key from your dashboard and an agent_id string that identifies this workflow (e.g. my-n8n-agent).

The workflow

Node 1 — Start Trace (HTTP Request)

Add an HTTP Request node before your AI Agent with these settings:

  • Method: POST
  • URL: https://nexus.keylightdigital.dev/api/v1/traces
  • Header: Authorization: Bearer YOUR_NEXUS_API_KEY
  • Body content type: JSON

Body:

{
  "agent_id": "my-n8n-agent",
  "name": "agent_run",
  "status": "running",
  "started_at": "{{ $now.toISO() }}"
}

The response contains trace_id — a UUID you will reference in every subsequent node using $('Start Trace').first().json.trace_id. Rename this node Start Trace so the expression resolves correctly.
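
Outside n8n, the same call can be sketched in plain Python with the standard library. This is an illustrative sketch of what the Start Trace node sends — the helper names (start_trace_body, start_trace) are not part of the Nexus API:

```python
import json
import urllib.request
from datetime import datetime, timezone

NEXUS_BASE = "https://nexus.keylightdigital.dev/api/v1"

def start_trace_body(agent_id: str) -> dict:
    """The JSON body the Start Trace node sends; the trace opens as "running"."""
    return {
        "agent_id": agent_id,
        "name": "agent_run",
        "status": "running",
        # n8n fills this field with {{ $now.toISO() }}
        "started_at": datetime.now(timezone.utc).isoformat(),
    }

def start_trace(agent_id: str, api_key: str) -> str:
    """POST /traces and return the trace_id from the response."""
    req = urllib.request.Request(
        f"{NEXUS_BASE}/traces",
        data=json.dumps(start_trace_body(agent_id)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["trace_id"]
```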

Node 2 — AI Agent

Connect your existing AI Agent node after the Start Trace node. Its configuration does not change. The AI Agent node output includes:

  • output — the agent's text response
  • tokenUsageData.promptTokens — input tokens
  • tokenUsageData.completionTokens — output tokens
  • tokenUsageData.totalTokens — total tokens

Rename this node AI Agent so the expressions in Node 3 resolve correctly.
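
To make the field layout concrete, here is an illustrative Python sketch of reading those fields from one AI Agent output item, defaulting token counts to zero when a provider omits tokenUsageData (see Troubleshooting):

```python
def read_agent_item(item: dict) -> dict:
    """Extract the AI Agent fields listed above from one output item.
    Some LLM providers omit tokenUsageData entirely, so default to 0."""
    usage = item.get("tokenUsageData") or {}
    return {
        "output": item.get("output", ""),
        "input_tokens": usage.get("promptTokens", 0),
        "output_tokens": usage.get("completionTokens", 0),
        "total_tokens": usage.get("totalTokens", 0),
    }
```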

Node 3 — Record Span (HTTP Request)

Add a second HTTP Request node after the AI Agent:

  • Method: POST
  • URL: https://nexus.keylightdigital.dev/api/v1/traces/{{ $('Start Trace').first().json.trace_id }}/spans
  • Header: Authorization: Bearer YOUR_NEXUS_API_KEY
  • Body content type: JSON

Body:

{
  "name": "agent_run",
  "status": "success",
  "started_at": "{{ $execution.startedAt }}",
  "ended_at": "{{ $now.toISO() }}",
  "output": "{{ $('AI Agent').first().json.output.slice(0, 500) }}",
  "metadata": {
    "input_tokens": "{{ $('AI Agent').first().json.tokenUsageData.promptTokens }}",
    "output_tokens": "{{ $('AI Agent').first().json.tokenUsageData.completionTokens }}",
    "total_tokens": "{{ $('AI Agent').first().json.tokenUsageData.totalTokens }}"
  }
}

$execution.startedAt is the ISO timestamp when the workflow started — a good approximation for when the agent run began. The .slice(0, 500) cap keeps the payload within Nexus's 4 KB metadata limit.
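
The body above can be mirrored in Python to show how the pieces fit together — a sketch only, with span_body as an illustrative name; the [:500] slice plays the role of .slice(0, 500):

```python
def span_body(agent_item: dict, started_at: str, ended_at: str) -> dict:
    """Build the Record Span body from one AI Agent output item.
    started_at / ended_at stand in for {{ $execution.startedAt }}
    and {{ $now.toISO() }}."""
    usage = agent_item.get("tokenUsageData") or {}
    return {
        "name": "agent_run",
        "status": "success",
        "started_at": started_at,
        "ended_at": ended_at,
        # cap the output at 500 characters, like .slice(0, 500)
        "output": agent_item.get("output", "")[:500],
        "metadata": {
            "input_tokens": usage.get("promptTokens", 0),
            "output_tokens": usage.get("completionTokens", 0),
            "total_tokens": usage.get("totalTokens", 0),
        },
    }
```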

Node 4 — Close Trace (HTTP Request)

Add a third HTTP Request node after Record Span:

  • Method: PATCH
  • URL: https://nexus.keylightdigital.dev/api/v1/traces/{{ $('Start Trace').first().json.trace_id }}
  • Header: Authorization: Bearer YOUR_NEXUS_API_KEY
  • Body content type: JSON

Body:

{
  "status": "success",
  "ended_at": "{{ $now.toISO() }}"
}
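
All three HTTP Request nodes share the same shape — method, URL, bearer header, JSON body — so outside n8n they reduce to one small helper. A sketch with stdlib urllib; nexus_call is an illustrative name, not part of the Nexus API:

```python
import json
import urllib.request

NEXUS_BASE = "https://nexus.keylightdigital.dev/api/v1"

def nexus_call(method: str, path: str, api_key: str, body: dict) -> dict:
    """Send one Nexus request with the headers the HTTP Request nodes use."""
    req = urllib.request.Request(
        f"{NEXUS_BASE}{path}",
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method=method,
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# The Close Trace node is then one call:
# nexus_call("PATCH", f"/traces/{trace_id}", api_key,
#            {"status": "success", "ended_at": ended_at})
```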

n8n expressions reference

Use these expressions to wire up the four nodes. All expressions use double curly braces and are entered in the HTTP Request node's URL or body fields.

  • $('Start Trace').first().json.trace_id — UUID of the trace, from the POST /v1/traces response
  • $('AI Agent').first().json.output — the agent's text response
  • $('AI Agent').first().json.tokenUsageData.promptTokens — input token count for the run
  • $('AI Agent').first().json.tokenUsageData.completionTokens — output token count for the run
  • $execution.startedAt — ISO timestamp when the workflow execution started
  • $now.toISO() — current ISO timestamp (a Luxon DateTime)

Node names are case-sensitive. If you rename your AI Agent or Start Trace node, update all references. Check Output in the n8n node panel to inspect the actual response shape and verify field names match your LLM provider.

Error handling

Connect the AI Agent node's error output (the red connector) to two additional HTTP Request nodes that record the error before the workflow stops:

Error span body (POST to …/traces/TRACE_ID/spans):

{
  "name": "agent_run",
  "status": "error",
  "started_at": "{{ $execution.startedAt }}",
  "ended_at": "{{ $now.toISO() }}",
  "error": "{{ $json.error.message || 'Unknown error' }}"
}

Close trace with error (PATCH to …/traces/TRACE_ID):

{
  "status": "error",
  "ended_at": "{{ $now.toISO() }}"
}

In n8n, enable Continue on Fail on the AI Agent node if you want the workflow to proceed to the error branch rather than stopping. Without it, the workflow execution stops and the error HTTP nodes will not be reached.
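
The two error-branch bodies can be sketched the same way (illustrative Python; the || fallback in the n8n expression becomes or):

```python
def error_span_body(error_item: dict, started_at: str, ended_at: str) -> dict:
    """Body for the error-branch span.
    Mirrors {{ $json.error.message || 'Unknown error' }}."""
    message = (error_item.get("error") or {}).get("message") or "Unknown error"
    return {
        "name": "agent_run",
        "status": "error",
        "started_at": started_at,
        "ended_at": ended_at,
        "error": message,
    }

def close_trace_error_body(ended_at: str) -> dict:
    """Body for the PATCH that closes the trace with status "error"."""
    return {"status": "error", "ended_at": ended_at}
```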

Troubleshooting

401 Unauthorized from Nexus
Check that the Authorization header is set to Bearer YOUR_KEY (not just the key). Store the key as an n8n credential to avoid hardcoding it in the workflow.
404 on the span or close-trace node
The trace_id expression is likely wrong or the node name does not match. Open the Start Trace node's output panel in n8n and confirm the field is named trace_id, then copy the expression exactly.
tokenUsageData is undefined
Not all n8n LLM providers populate tokenUsageData. Inspect the AI Agent node's raw output in the n8n execution log to see what fields are available and adjust the expression accordingly. You can default to zero with the ?? operator: $('AI Agent').first().json.tokenUsageData?.promptTokens ?? 0.
Trace shows "running" but never closes
The PATCH node did not execute — either the workflow stopped early due to an upstream error, or the node is not connected. Check the n8n execution log for which nodes ran. Make sure the Close Trace node is wired in both the success path and the error branch.