Blog · 2026-04-09 · 5 min read

How to Add Tracing to Your LangChain Agent in 5 Minutes

A step-by-step tutorial for adding Nexus observability to a LangChain agent. Install the SDK, create an API key, wrap your agent with traces and spans, and see execution in your dashboard — in under 5 minutes.

LangChain agents are easy to build and notoriously hard to debug in production. A tool call silently times out. The LLM picks the wrong tool. Context overflows mid-task. Without traces, you're guessing.

This tutorial shows you how to add Nexus tracing to any LangChain agent — TypeScript or Python — in about 5 minutes. By the end, every agent run will appear in your Nexus dashboard with full span-by-span detail.

Prerequisites

Before you start, you'll need:

  • A Nexus account and API key (created in your dashboard)
  • Node.js or Python, depending on which SDK you use
  • An OpenAI API key (the examples use gpt-4o)
  • A Tavily API key for the example search tool

Step 1: Install the SDK

Install the Nexus SDK alongside your LangChain dependencies:

TypeScript / npm

bash
npm install @keylightdigital/nexus langchain @langchain/openai

Python / pip

bash
pip install nexus-agent langchain langchain-openai
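Both snippets read credentials from the environment. Export them before running anything (NEXUS_API_KEY and OPENAI_API_KEY are the names the examples below read; TAVILY_API_KEY is what LangChain's Tavily tool expects — the placeholder values here are just illustrations):

```shell
export NEXUS_API_KEY="your-nexus-api-key"    # from the Nexus dashboard
export OPENAI_API_KEY="your-openai-api-key"
export TAVILY_API_KEY="your-tavily-api-key"  # read by TavilySearchResults
```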

Step 2: Initialize the client

Create a NexusClient with your API key and an agent ID. The agent ID can be any string — it groups all traces from this agent together in your dashboard.

TypeScript
import { NexusClient } from '@keylightdigital/nexus'

const nexus = new NexusClient({
  apiKey: process.env.NEXUS_API_KEY!,   // from nexus.keylightdigital.dev/dashboard/keys
  agentId: 'my-langchain-agent',
})

Step 3: Wrap your agent run with a trace

Wrap each agent invocation with startTrace() and trace.end(). Add spans inside the trace for each logical step. Here's a complete working example with a LangChain OpenAI functions agent:

TypeScript
import { ChatOpenAI } from '@langchain/openai'
import { AgentExecutor, createOpenAIFunctionsAgent } from 'langchain/agents'
import { pull } from 'langchain/hub'
import { TavilySearchResults } from '@langchain/community/tools/tavily_search'
import { NexusClient } from '@keylightdigital/nexus'

const nexus = new NexusClient({
  apiKey: process.env.NEXUS_API_KEY!,
  agentId: 'research-agent',
})

async function runAgent(question: string) {
  // Step 1: start a trace for this agent run
  const trace = await nexus.startTrace({
    name: `research: ${question.slice(0, 60)}`,
    metadata: { question },
  })

  try {
    const tools = [new TavilySearchResults({ maxResults: 3 })]
    const prompt = await pull('hwchase17/openai-functions-agent')
    const llm = new ChatOpenAI({ modelName: 'gpt-4o', temperature: 0 })
    const agent = await createOpenAIFunctionsAgent({ llm, tools, prompt })
    const executor = new AgentExecutor({ agent, tools })

    // Step 2: wrap the agent run with a span
    const agentSpan = await trace.addSpan({
      name: 'agent-executor-run',
      input: { question },
    })

    const result = await executor.invoke({ input: question })

    await agentSpan.end({ output: { answer: result.output }, status: 'ok' })
    await trace.end({ status: 'success' })
    return result.output
  } catch (err) {
    await trace.end({ status: 'error' })
    throw err
  }
}

// Usage
runAgent('What is the latest news on LLM observability?').then(console.log)

Python
from langchain.agents import create_openai_functions_agent, AgentExecutor
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain import hub
from nexus_agent import NexusClient
import os

nexus = NexusClient(
    api_key=os.environ['NEXUS_API_KEY'],
    agent_id='research-agent',
)

def run_agent(question: str) -> str:
    # Step 1: start a trace
    trace = nexus.start_trace(
        name=f'research: {question[:60]}',
        metadata={'question': question},
    )

    try:
        tools = [TavilySearchResults(max_results=3)]
        prompt = hub.pull('hwchase17/openai-functions-agent')
        llm = ChatOpenAI(model='gpt-4o', temperature=0)
        agent = create_openai_functions_agent(llm, tools, prompt)
        executor = AgentExecutor(agent=agent, tools=tools)

        # Step 2: wrap the executor call
        agent_span = trace.add_span(
            name='agent-executor-run',
            input={'question': question},
        )

        result = executor.invoke({'input': question})
        agent_span.end(output={'answer': result['output']}, status='ok')
        trace.end(status='success')
        return result['output']
    except Exception:
        trace.end(status='error')
        raise

Step 4: Add spans for individual tool calls (optional)

For full visibility into what the agent is doing, add spans around tool calls. This shows you exactly which tools were called, what inputs they received, and whether they succeeded:

TypeScript
// Instrument individual tool calls for full visibility
// (assumes `trace` from Step 3, plus a `searchTool` and `searchQuery` in scope)
const searchSpan = await trace.addSpan({
  name: 'tavily-search',
  input: { query: searchQuery },
})

const results = await searchTool.invoke(searchQuery)

await searchSpan.end({
  output: { result_count: results.length, results },
  status: 'ok',
})

Once this is in place, the trace detail page will show a waterfall of spans: the agent executor at the top, individual tool calls underneath, each with timing and I/O.
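The timing math behind that waterfall is simple to sketch: each bar is offset by the span's start relative to the trace start, and sized by its duration. Here's a standalone illustration with hand-made timestamps (no SDK involved; the span dicts are hypothetical stand-ins, not the Nexus wire format):

```python
from datetime import datetime, timedelta

# Hand-made span records shaped like the steps above (hypothetical fields)
t0 = datetime(2026, 4, 9, 12, 0, 0)
spans = [
    {'name': 'agent-executor-run', 'start': t0,                          'end': t0 + timedelta(seconds=4.2)},
    {'name': 'llm-call',           'start': t0 + timedelta(seconds=0.1), 'end': t0 + timedelta(seconds=1.3)},
    {'name': 'tavily-search',      'start': t0 + timedelta(seconds=1.4), 'end': t0 + timedelta(seconds=2.0)},
]

# Each bar: offset from trace start, plus the span's own duration
trace_start = min(s['start'] for s in spans)
rows = []
for s in spans:
    offset_ms = (s['start'] - trace_start).total_seconds() * 1000
    duration_ms = (s['end'] - s['start']).total_seconds() * 1000
    rows.append((s['name'], offset_ms, duration_ms))
    print(f"{s['name']:<20} +{offset_ms:6.0f}ms  {duration_ms:6.0f}ms")
```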

Advanced: Automatic LLM span capture with callbacks

LangChain's callback system lets you hook into every LLM call, tool call, and chain step. You can build a reusable NexusCallbackHandler that automatically creates spans for every LLM invocation — no per-call instrumentation needed:

TypeScript
import { BaseCallbackHandler } from 'langchain/callbacks'
import type { Serialized } from 'langchain/load/serializable'
import type { LLMResult } from 'langchain/schema'
import type { Trace } from '@keylightdigital/nexus'

class NexusCallbackHandler extends BaseCallbackHandler {
  name = 'NexusCallbackHandler'
  private spanMap = new Map<string, Awaited<ReturnType<Trace['addSpan']>>>()

  constructor(private trace: Trace) {
    super()
  }

  async handleLLMStart(_llm: Serialized, prompts: string[], runId: string) {
    const span = await this.trace.addSpan({
      name: 'llm-call',
      input: { prompts },
    })
    this.spanMap.set(runId, span)
  }

  async handleLLMEnd(output: LLMResult, runId: string) {
    const span = this.spanMap.get(runId)
    if (span) {
      await span.end({ output: output.generations, status: 'ok' })
      this.spanMap.delete(runId)
    }
  }

  async handleLLMError(err: Error, runId: string) {
    const span = this.spanMap.get(runId)
    if (span) {
      await span.end({ error: err.message, status: 'error' })
      this.spanMap.delete(runId)
    }
  }
}

// Usage: attach to your LLM or executor
const trace = await nexus.startTrace({ name: 'agent-run' })
const handler = new NexusCallbackHandler(trace)
const result = await executor.invoke({ input: question }, { callbacks: [handler] })
await trace.end({ status: 'success' })

Step 5: View your traces

Run your agent, then open /dashboard/traces. Each run appears as a trace.

What you get in the dashboard

  • ✓ Trace list with status, duration, agent name, and timestamp
  • ✓ Per-trace span waterfall with relative timing
  • ✓ Input/output captured for each span
  • ✓ Error messages and failure details
  • ✓ Filter by agent, status, date range
  • ✓ Shareable public trace links for debugging with teammates

What's next

With basic tracing in place, instrument your individual tool calls (Step 4) and adopt the callback handler so every LLM call is captured automatically. From there, use the dashboard filters to spot failing runs and share trace links when debugging with teammates.

Start monitoring your LangChain agents free

Free tier includes 1,000 traces/month and full trace viewer. No credit card required.

Get started free →