
Integration Guide

LangChain Observability with Nexus

Monitor your LangChain agents with Nexus. Add a few lines of code to track every chain invocation, LLM call, and tool use, then view the resulting traces in the dashboard.

Why use Nexus with LangChain?

A single LangChain agent run fans out into many nested calls: the agent loop, one or more LLM requests, and any tool invocations. Nexus records each of these as a span on one trace, so the dashboard shows the full waterfall for a run, with timing, input, and output for every step, making slow or failing steps easy to spot.

Step 1 — Install the SDK

TypeScript

npm install @keylightdigital/nexus

Python

pip install keylightdigital-nexus

Step 2 — Create an API key

Go to /dashboard/keys and create a new API key. Copy the key — it's only shown once.
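Keep the key in an environment variable rather than hardcoding it. A minimal sketch of a fail-fast check (the `require_env` helper is illustrative, not part of the SDK; `NEXUS_API_KEY` matches the examples below):

```python
import os

def require_env(name: str) -> str:
    """Fetch a required environment variable, failing fast with a clear message."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set. Create a key at /dashboard/keys and export it "
            f"before starting your agent."
        )
    return value

# Usage: api_key = require_env("NEXUS_API_KEY")
```

Failing at startup beats a confusing authentication error deep inside an agent run.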

Step 3 — Instrument your LangChain agent

TypeScript (with LangChain.js)

import { NexusClient } from '@keylightdigital/nexus';
import { ChatOpenAI } from '@langchain/openai';
import { AgentExecutor, createOpenAIFunctionsAgent } from 'langchain/agents';
import { DynamicTool } from '@langchain/core/tools';

const nexus = new NexusClient({
  apiKey: process.env.NEXUS_API_KEY!,
  agentId: 'langchain-research-agent',
});

async function runAgent(userQuery: string) {
  // Start a Nexus trace for this agent invocation
  const trace = await nexus.startTrace({
    name: `Research: ${userQuery.slice(0, 50)}`,
    metadata: { query: userQuery },
  });

  try {
    // Instrument LLM call
    await trace.addSpan({
      name: 'llm-call',
      input: { query: userQuery },
    });

    const model = new ChatOpenAI({ modelName: 'gpt-4o-mini' });
    const tools = [
      new DynamicTool({
        name: 'search',
        description: 'Search the web',
        func: async (input: string) => {
          const result = 'web search results for: ' + input;
          // Instrument tool use as a child span, recording the actual result
          await trace.addSpan({
            name: 'tool-search',
            input: { query: input },
            output: { result },
          });
          return result;
        },
      }),
    ];

    const agent = await createOpenAIFunctionsAgent({ llm: model, tools, prompt: /* ... */ });
    const executor = new AgentExecutor({ agent, tools });
    const result = await executor.invoke({ input: userQuery });

    await trace.end({ status: 'success' });
    return result;
  } catch (error) {
    await trace.end({
      status: 'error',
      error: error instanceof Error ? error.message : String(error),
    });
    throw error;
  }
}

Python (with LangChain)

from nexus_client import NexusClient
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain.tools import tool
import os

nexus = NexusClient(
    api_key=os.environ["NEXUS_API_KEY"],
    agent_id="langchain-research-agent",
)

@tool
def search(query: str) -> str:
    """Search the web for information."""
    return f"Search results for: {query}"

def run_agent(user_query: str):
    # Start a Nexus trace for this invocation
    trace = nexus.start_trace(
        name=f"Research: {user_query[:50]}",
        metadata={"query": user_query},
    )

    try:
        # Track the LLM initialization
        trace.add_span(
            name="agent-init",
            input={"query": user_query},
        )

        llm = ChatOpenAI(model="gpt-4o-mini")
        tools = [search]
        agent = create_openai_functions_agent(llm, tools, prompt=...)
        executor = AgentExecutor(
            agent=agent,
            tools=tools,
            verbose=True,
            return_intermediate_steps=True,  # required for the tool-call loop below
        )

        result = executor.invoke({"input": user_query})

        # Track each tool call inline
        for step in result.get("intermediate_steps", []):
            action, observation = step
            trace.add_span(
                name=f"tool-{action.tool}",
                input={"query": action.tool_input},
                output={"result": str(observation)[:500]},
            )

        trace.end(status="success")
        return result["output"]

    except Exception as e:
        trace.end(status="error", error=str(e))
        raise

Step 4 — View traces in the dashboard

Run your agent and navigate to /dashboard/traces to see traces appear in real time. Click any trace to view the span waterfall — each LLM call and tool use is a separate span with timing, input, and output.

View demo with sample LangChain traces →

Common patterns

Tracking multi-step chains

Call trace.addSpan() once per chain step, passing that step's input and output. The name field is what appears in the waterfall.
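For example (sketched with an in-memory stand-in for the trace object and two toy chain steps — in real code you would use the trace returned by nexus.start_trace, as in Step 3):

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class FakeTrace:
    """In-memory stand-in for a Nexus trace, used only to illustrate the pattern."""
    spans: list = field(default_factory=list)

    def add_span(self, name: str, input: Any = None, output: Any = None):
        self.spans.append({"name": name, "input": input, "output": output})

def run_chain(trace, query: str) -> str:
    # One span per chain step; the span name is what shows up in the waterfall.
    draft = query.upper()  # toy "outline" step
    trace.add_span(name="chain-outline", input={"query": query}, output={"draft": draft})

    final = f"answer: {draft}"  # toy "answer" step
    trace.add_span(name="chain-answer", input={"draft": draft}, output={"final": final})
    return final

trace = FakeTrace()
run_chain(trace, "quantum computing")
print([s["name"] for s in trace.spans])  # → ['chain-outline', 'chain-answer']
```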

Capturing errors

Wrap your chain in try/catch. Call trace.end({ status: 'error' }) and pass error: err.message to the failing span.
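A sketch of this pattern in Python, using an in-memory stand-in for the trace (the real call is trace.end, as in Step 3):

```python
class FakeTrace:
    """In-memory stand-in for a Nexus trace (illustration only)."""
    def __init__(self):
        self.status = None
        self.error = None

    def end(self, status, error=None):
        self.status, self.error = status, error

def run_with_error_capture(trace, chain):
    try:
        result = chain()
        trace.end(status="success")
        return result
    except Exception as e:
        # Record the failure so the trace is marked as errored in the dashboard,
        # then re-raise so the caller still sees the exception.
        trace.end(status="error", error=str(e))
        raise

trace = FakeTrace()
try:
    run_with_error_capture(trace, lambda: 1 / 0)
except ZeroDivisionError:
    pass
print(trace.status)  # → error
```

Re-raising after recording keeps your own error handling intact while ensuring no failed run goes untraced.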

Parallel tool calls

LangChain sometimes calls tools in parallel (with createOpenAIToolsAgent). Add each tool span individually — they'll appear in the waterfall ordered by start time.
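One way to sketch this with asyncio (again using an in-memory stand-in for the trace; the simulated tools and sleep are illustrative):

```python
import asyncio
import time

class FakeTrace:
    """In-memory stand-in for a Nexus trace (illustration only)."""
    def __init__(self):
        self.spans = []

    def add_span(self, name, input=None, output=None):
        self.spans.append({"name": name, "at": time.monotonic(),
                           "input": input, "output": output})

async def run_tool(trace, name, query):
    await asyncio.sleep(0.01)  # simulate tool I/O (e.g. a network call)
    result = f"{name} results for {query}"
    # Each tool records its own span, even when tools run concurrently.
    trace.add_span(name=f"tool-{name}", input={"query": query},
                   output={"result": result})
    return result

async def main():
    trace = FakeTrace()
    await asyncio.gather(
        run_tool(trace, "search", "langchain"),
        run_tool(trace, "wiki", "langchain"),
    )
    return trace

trace = asyncio.run(main())
print(sorted(s["name"] for s in trace.spans))  # → ['tool-search', 'tool-wiki']
```

Each concurrent tool call still gets its own span, and the waterfall orders them by when they ran.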


Start monitoring your LangChain agents

Free plan: 1,000 traces/month. No credit card needed.