OpenAI SDK Integration

Add Nexus observability to your OpenAI API calls in minutes. Works with GPT-4o, o1, the Assistants API, and any model accessible via the OpenAI Python or TypeScript SDK.

Quickstart

Install both SDKs, then wrap your OpenAI calls inside a Nexus trace. The trace captures timing, status, and any metadata you add — including token counts.

TypeScript
npm install openai @keylightdigital/nexus
Python
pip install openai keylightdigital-nexus
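
Both examples below read credentials from environment variables. A typical local setup looks like this (the key values are placeholders, not real keys):

```shell
# Placeholder values; replace with your real keys
export OPENAI_API_KEY="sk-..."
export NEXUS_API_KEY="nx-..."
```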

TypeScript example

Wrap each OpenAI call in a Nexus span. Record the model, prompt tokens, and completion tokens in metadata for cost analysis.

import OpenAI from 'openai'
import { NexusClient } from '@keylightdigital/nexus'

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })
const nexus = new NexusClient({
  apiKey: process.env.NEXUS_API_KEY!,
  agentId: 'my-gpt-agent',
})

async function runAgent(prompt: string) {
  const trace = await nexus.startTrace({ name: 'gpt4-completion' })
  const span = await trace.startSpan({ name: 'openai.chat.completions' })

  try {
    const response = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: prompt }],
    })

    const usage = response.usage
    await span.end({
      status: 'success',
      metadata: {
        model: 'gpt-4o',
        prompt_tokens: usage?.prompt_tokens,
        completion_tokens: usage?.completion_tokens,
        total_tokens: usage?.total_tokens,
      },
    })

    await trace.end({ status: 'success' })
    return response.choices[0].message.content
  } catch (err) {
    // End the span too, so it is not left open when the call fails
    await span.end({ status: 'error' })
    await trace.end({ status: 'error', metadata: { error: String(err) } })
    throw err
  }
}

Python example

import os
from openai import OpenAI
from keylightdigital_nexus import NexusClient

openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
nexus = NexusClient(
    api_key=os.environ["NEXUS_API_KEY"],
    agent_id="my-gpt-agent",
)

def run_agent(prompt: str) -> str:
    trace = nexus.start_trace(name="gpt4-completion")
    span = trace.start_span(name="openai.chat.completions")
    try:
        response = openai_client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )

        usage = response.usage
        span.end(
            status="success",
            metadata={
                "model": "gpt-4o",
                "prompt_tokens": usage.prompt_tokens if usage else None,
                "completion_tokens": usage.completion_tokens if usage else None,
                "total_tokens": usage.total_tokens if usage else None,
            },
        )

        trace.end(status="success")
        return response.choices[0].message.content or ""
    except Exception as err:
        # End the span too, so it is not left open when the call fails
        span.end(status="error")
        trace.end(status="error", metadata={"error": str(err)})
        raise
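
The try/except bookkeeping above repeats for every traced call. Assuming the synchronous `start_trace`/`start_span`/`end` methods shown in this example, a small context manager (a hypothetical helper, not part of the Nexus SDK) can centralize it:

```python
from contextlib import contextmanager


@contextmanager
def traced_span(nexus_client, trace_name: str, span_name: str):
    """Open a trace and span, yield a metadata dict, and end both on exit.

    Hypothetical helper built on the start_trace/start_span/end calls
    shown above; not part of the Nexus SDK itself.
    """
    trace = nexus_client.start_trace(name=trace_name)
    span = trace.start_span(name=span_name)
    metadata: dict = {}
    try:
        yield metadata  # caller fills in token counts, model name, etc.
        span.end(status="success", metadata=metadata)
        trace.end(status="success")
    except Exception as err:
        span.end(status="error")
        trace.end(status="error", metadata={"error": str(err)})
        raise
```

With this helper, `run_agent` shrinks to a `with traced_span(nexus, "gpt4-completion", "openai.chat.completions") as metadata:` block that makes the OpenAI call and writes the usage fields into `metadata`.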

Tracking token usage

Recording token counts in span metadata lets you analyze cost per trace in the Nexus dashboard. All metadata is stored as JSON and searchable in the trace list.

prompt_tokens: Tokens in the input (system prompt + user message)
completion_tokens: Tokens generated in the response
total_tokens: Sum of prompt and completion tokens; used for billing estimates
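
As a sketch of how these fields feed a cost estimate once they are in span metadata (the per-1K-token prices below are illustrative placeholders, not current OpenAI pricing):

```python
def estimate_cost_usd(
    metadata: dict,
    input_price_per_1k: float = 0.0025,   # placeholder price, not real pricing
    output_price_per_1k: float = 0.01,    # placeholder price, not real pricing
) -> float:
    """Rough per-trace cost estimate from the token counts in span metadata."""
    prompt = metadata.get("prompt_tokens") or 0
    completion = metadata.get("completion_tokens") or 0
    return (prompt / 1000) * input_price_per_1k + (completion / 1000) * output_price_per_1k


# 1200 input tokens and 300 output tokens at the placeholder prices
print(round(estimate_cost_usd({"prompt_tokens": 1200, "completion_tokens": 300}), 6))
```

Summing this over all traces for an agent gives a running spend estimate; the same arithmetic can be done in the dashboard since metadata is stored as JSON.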
