§ INTEGRATION · LANGCHAIN
LangChain callback handler
Install the optional extra, attach the handler to your chain's callbacks, and every LLM call, tool invocation, retriever lookup, and agent step is governed and traced, in both Python and TypeScript.
Prerequisites
- Execlave account with an active API key — Settings → API keys.
- A registered agent in Execlave. Use the agent's lowercase-kebab `agentId` in the handler.
- LangChain `>=0.3` (Python) or `@langchain/core >=0.3` (JS).
Install
The handler is behind an optional extra so the default SDK install does not pull LangChain as a transitive dependency.
Python
```shell
pip install 'execlave-sdk[langchain]'
```

JavaScript / TypeScript
```shell
npm install @execlave/sdk @langchain/core
```

Instant setup with a copilot
Paste this prompt into your AI coding assistant (Cursor, Claude Code, ChatGPT, Copilot). It will wire Execlave into your existing LangChain code without rewriting your chains.
```
You are adding Execlave (an AI agent governance platform) to an existing LangChain application. Do not rewrite existing chains — only wire the callback handler.

Rules you MUST follow:
1. Install the correct extra. Python: `pip install 'execlave-sdk[langchain]'`. JS/TS: `npm install @execlave/sdk @langchain/core`.
2. Read `EXECLAVE_API_KEY` from environment variables. Never hardcode keys.
3. Create exactly ONE Execlave client per process and reuse it. Python: `exe = Execlave(api_key=os.environ["EXECLAVE_API_KEY"])`. JS: `const exe = new Execlave({ apiKey: process.env.EXECLAVE_API_KEY! })`.
4. Create an `ExeclaveCallbackHandler` per logical agent (stable `agent_id`/`agentId` string, lowercase-kebab). Do NOT create a new handler per request.
5. Attach the handler via the `callbacks` field on the invoke/stream config — do not replace the user's existing callbacks, append to them.
6. Import path: Python `from execlave.integrations.langchain import ExeclaveCallbackHandler`. JS `import { ExeclaveCallbackHandler } from '@execlave/sdk/integrations/langchain'`.
7. Wrap `chain.invoke` / `chain.stream` in try/except for `execlave.errors.PolicyBlockedError` (Python) or `PolicyBlockedError` from `@execlave/sdk` (JS). On block, return a structured 4xx response containing `exc.violations` (list of { policyId, message, severity }). Do NOT swallow the error silently.
8. Do NOT call `exe.enforce_policy` manually before invoking the chain — the handler already runs enforcement on chain start and every tool call.
9. Do NOT use the deprecated `run_langchain()` helper. It is removed in execlave-sdk 2.0.
10. On process shutdown, call `exe.flush()` (sync) or `await exe.shutdown()` (JS) so in-flight spans are not dropped.

Deliverables:
- Add the install command to the project's package manifest (requirements.txt / pyproject / package.json).
- Add env vars `EXECLAVE_API_KEY` and optional `EXECLAVE_BASE_URL` to `.env.example`.
- Show me a single diff per file. Do not modify unrelated code.
```
Reference: https://www.execlave.com/docs/integrations/langchain
API reference: https://www.execlave.com/docs/sdk-reference

Quick start
One handler wires enforcement and tracing into every callback LangChain emits. No decorators, no wrapper functions.
Python
```python
from execlave import Execlave
from execlave.integrations.langchain import ExeclaveCallbackHandler
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

exe = Execlave(api_key="exe_live_...")
handler = ExeclaveCallbackHandler(exe, agent_id="support-bot")

prompt = ChatPromptTemplate.from_messages([("user", "{question}")])
llm = ChatOpenAI(model="gpt-4o-mini")
chain = prompt | llm

# Enforcement fires on the chain's top-level input and on every tool call.
# A PolicyBlockedError short-circuits the chain before the LLM is called.
answer = chain.invoke(
    {"question": "Summarize our Q3 pipeline"},
    config={"callbacks": [handler]},
)
```

JavaScript / TypeScript
```typescript
import { Execlave } from '@execlave/sdk';
import { ExeclaveCallbackHandler } from '@execlave/sdk/integrations/langchain';
import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';

const exe = new Execlave({ apiKey: process.env.EXECLAVE_API_KEY! });
const handler = new ExeclaveCallbackHandler(exe, { agentId: 'support-bot' });

const prompt = ChatPromptTemplate.fromMessages([['user', '{question}']]);
const chain = prompt.pipe(new ChatOpenAI({ model: 'gpt-4o-mini' }));

const answer = await chain.invoke(
  { question: 'Summarize our Q3 pipeline' },
  { callbacks: [handler] },
);
```

What gets instrumented
Every LangChain callback maps to a span under the same trace id, so you get the full chain tree in one view.
| Callback | Span kind | Notes |
|---|---|---|
| `on_chain_start` / `handleChainStart` | chain | Top-level chain inputs are pushed through pre-execution enforcement. |
| `on_llm_start` / `handleLLMStart` | llm | Captures model name and completion input. |
| `on_chat_model_start` / `handleChatModelStart` | llm | Captures structured chat messages, preserves role metadata. |
| `on_tool_start` / `handleToolStart` | tool | Each tool call runs enforcement, with the tool name included in the allowlist check. |
| `on_agent_action` / `handleAgentAction` | agent | Records a planning step from an AgentExecutor. |
| `on_retriever_start` / `handleRetrieverStart` | retriever | Records a RAG lookup. Page content is never forwarded upstream. |
| `*_error` | — | The span closes as error; the parent trace status becomes error. |
Handling blocks
Enforcement runs on the chain's top-level input and on every tool call. A blocked call raises `PolicyBlockedError` with the exact violations list; transient enforcement errors are logged and the chain proceeds (fail-open).
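The block-versus-fail-open distinction can be illustrated without the SDK. In this sketch both `PolicyBlockedError` and the `enforce` helper are stand-ins for illustration, not the real client:

```python
import logging

class PolicyBlockedError(Exception):
    """Stand-in for execlave.errors.PolicyBlockedError."""
    def __init__(self, violations):
        super().__init__("blocked by policy")
        self.violations = violations

def enforce(check, payload) -> bool:
    """Run a pre-execution check with the semantics described above:
    policy blocks propagate, transient errors fail open."""
    try:
        check(payload)  # may raise PolicyBlockedError or a transient error
        return True
    except PolicyBlockedError:
        raise           # a real block always short-circuits the chain
    except Exception as exc:
        logging.warning("enforcement unavailable, failing open: %s", exc)
        return True     # transient failure: the chain proceeds

# Transient outage: the call is logged and proceeds.
def flaky(_payload):
    raise TimeoutError("enforcement service unreachable")

assert enforce(flaky, {"question": "hi"}) is True
```

Only the `PolicyBlockedError` path reaches your application code, which is why the error handling below catches exactly that one type.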
Python
```python
from execlave.errors import PolicyBlockedError

def ask(user_input: str):
    try:
        return chain.invoke({"question": user_input}, config={"callbacks": [handler]})
    except PolicyBlockedError as exc:
        # Policy blocked the request before the LLM was called.
        # exc.violations contains the matched policies.
        return {"error": "blocked", "violations": exc.violations}
```

JavaScript / TypeScript
```typescript
import { PolicyBlockedError } from '@execlave/sdk';

async function ask(userInput: string) {
  try {
    return await chain.invoke({ question: userInput }, { callbacks: [handler] });
  } catch (err) {
    if (err instanceof PolicyBlockedError) {
      return { error: 'blocked', violations: err.violations };
    }
    throw err;
  }
}
```

Migration from `run_langchain()`
`run_langchain()` is deprecated. For agent-shaped workflows use the callback handler: it captures tool calls, retriever hits, and nested chain spans that the helper cannot see.
When to use which
- `ExeclaveCallbackHandler` — agents, tool use, retrievers, LCEL chains with branching, anything with child runs.
- `run_langchain()` — single invoke, no tools, no retriever. Emits a DeprecationWarning; will be removed in execlave-sdk 2.0.
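Whichever entry point you use, flush pending spans before the process exits (rule 10 of the setup prompt). One common pattern is an `atexit` hook; the client below is a stand-in for illustration, not the real SDK:

```python
import atexit

class FakeExeclave:
    """Stand-in client; the real SDK exposes flush() per the setup prompt."""
    def __init__(self):
        self.pending = ["span-1", "span-2"]
        self.flushed = []

    def flush(self):
        # Drain the in-memory buffer; safe to call more than once.
        self.flushed.extend(self.pending)
        self.pending.clear()

exe = FakeExeclave()
atexit.register(exe.flush)  # runs on normal interpreter shutdown

# Calling flush early (e.g. from a signal handler) is also safe,
# because a second call finds an empty buffer.
exe.flush()
print(exe.pending)  # []
```

Without the hook, spans buffered at exit are simply dropped, so short-lived processes such as CLI tools and serverless functions are the ones most likely to lose traces.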