Getting Started

Add AI agent governance to your application in under 5 minutes.

Prerequisites

Execlave Backend
Running on http://localhost:4000
API Key
Generated from Settings → API Keys

Choose Your SDK

Instant Setup with AI Assistant

Copy this prompt and paste it into GitHub Copilot, Cursor, or any AI coding assistant. It will set up Execlave in your project automatically.

Copy this prompt to your AI coding assistant:

Integrate Execlave into my project using the JavaScript/TypeScript SDK. Here's what you need to do:

1. Install the SDK:

   npm install @execlave/sdk

2. Add this environment variable to my .env file:

   EXECLAVE_API_KEY=<my-api-key>

3. Create a file called "execlave.ts" (or .js) in my project's lib/ or utils/ folder with this initialization code:

   import { Execlave, AgentPausedError, PolicyBlockedError } from '@execlave/sdk';

   export const ag = new Execlave({
     apiKey: process.env.EXECLAVE_API_KEY!,
     baseUrl: process.env.EXECLAVE_BASE_URL || 'http://localhost:4000',
     environment: process.env.NODE_ENV || 'development',
     debug: process.env.NODE_ENV !== 'production',
   });

   // Register agent on startup (idempotent — safe to call multiple times)
   export async function initAgent() {
     return ag.registerAgent({
       agentId: 'my-agent',
       name: 'My AI Agent',
       type: 'chatbot',
       platform: 'custom',
     });
   }

   // Wrap any LLM call with automatic tracing
   export function traceCall<T>(fn: (input: string) => Promise<T>, agentId = 'my-agent') {
     return ag.wrap(fn, { agentId });
   }

   // Export error types for catch blocks
   export { AgentPausedError, PolicyBlockedError };

4. In my main application entry point (e.g. index.ts, app.ts, or server.ts), call initAgent():

   import { initAgent } from './lib/execlave';

   await initAgent();

5. Wrap my LLM calls with tracing. For example:

   import { ag } from './lib/execlave';

   async function handleUserMessage(userMessage: string) {
     const trace = ag.startTrace({ agentId: 'my-agent' });
     trace.setInput(userMessage);
     try {
       const response = await myLLM.call(userMessage);
       trace.setOutput(response).setModel('gpt-4').finish();
       return response;
     } catch (error) {
       trace.finish('error', String(error));
       throw error;
     }
   }

6. Add graceful shutdown to catch process exit:

   process.on('SIGTERM', async () => {
     await ag.shutdown();
     process.exit(0);
   });

The SDK has zero runtime dependencies and automatically:

- Buffers and batch-flushes traces in the background
- Detects prompt injection attempts client-side
- Respects the kill switch (throws AgentPausedError if the agent is paused)
- Caches policy enforcement decisions (60s TTL)
- Implements a circuit breaker (3 failures → 60s backoff)
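The circuit-breaker behavior listed above (3 consecutive failures, then a 60-second backoff) can be pictured with a small sketch. This is an illustrative pattern only, not the SDK's actual internals; the class and method names here are hypothetical:

```typescript
// Illustrative circuit breaker: after `threshold` consecutive failures,
// attempts are skipped until `backoffMs` has elapsed.
class CircuitBreaker {
  private failures = 0;
  private openedAt: number | null = null;

  constructor(private threshold = 3, private backoffMs = 60_000) {}

  // Returns true if a call may be attempted right now.
  canAttempt(now = Date.now()): boolean {
    if (this.openedAt === null) return true;
    if (now - this.openedAt >= this.backoffMs) {
      // Backoff elapsed: half-open, allow one trial attempt.
      this.openedAt = null;
      this.failures = this.threshold - 1;
      return true;
    }
    return false;
  }

  recordSuccess(): void {
    this.failures = 0;
    this.openedAt = null;
  }

  recordFailure(now = Date.now()): void {
    this.failures += 1;
    if (this.failures >= this.threshold) this.openedAt = now;
  }
}
```

A successful flush (recordSuccess) closes the breaker again; a failure during the half-open trial reopens it immediately.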
or follow the manual steps below

Step-by-Step Guide

1. Install the SDK

Install the Execlave SDK package (zero runtime dependencies):

npm install @execlave/sdk

2. Get your API Key

Go to Dashboard → Settings → API Keys and generate a new key. Copy the key — it starts with exe_prod_ or exe_test_.

Add it to your environment:

# .env
EXECLAVE_API_KEY=exe_prod_your_key_here

3. Initialize the SDK

Create a shared client instance:

// lib/execlave.ts
import { Execlave } from '@execlave/sdk';

export const ag = new Execlave({
  apiKey: process.env.EXECLAVE_API_KEY!,
  baseUrl: 'http://localhost:4000',  // your Execlave API
  environment: process.env.NODE_ENV || 'development',
});

4. Register your Agent

Register your AI agent on startup. This is idempotent — safe to call every time your app starts.

import { ag } from './lib/execlave';

// Call once during app initialization
await ag.registerAgent({
  agentId: 'my-chatbot',         // unique identifier for your agent
  name: 'Customer Support Bot',   // display name in dashboard
  type: 'chatbot',                // chatbot | copilot | autonomous | workflow
  platform: 'custom',             // custom | openai | anthropic | langchain
});

5. Trace your first LLM call

Wrap your LLM calls to capture inputs, outputs, tokens, and cost:

import { ag } from './lib/execlave';

async function handleMessage(userMessage: string) {
  const trace = ag.startTrace({
    agentId: 'my-chatbot',
    sessionId: 'user-session-123',  // optional: groups a conversation
  });
  trace.setInput(userMessage);

  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: userMessage }],
  });

  const answer = response.choices[0].message.content;
  trace
    .setOutput(answer)
    .setModel('gpt-4')
    .setTokens(response.usage?.prompt_tokens, response.usage?.completion_tokens)
    .setCost(0.03)
    .finish();

  return answer;
}

// Or use the simpler wrap() helper:
const tracedCall = ag.wrap(
  async (input: string) => {
    const res = await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: input }],
    });
    return res.choices[0].message.content;
  },
  { agentId: 'my-chatbot' }
);

6. Verify in the Dashboard

Open the Execlave dashboard: you should see your agent registered and traces flowing in.

Error Handling

The SDK exports specific error types so you can handle governance events gracefully:

import { AgentPausedError, PolicyBlockedError } from '@execlave/sdk';

try {
  const result = await tracedCall('Process this order');
} catch (err) {
  if (err instanceof AgentPausedError) {
    // Admin paused this agent via kill switch
    return 'Service temporarily unavailable.';
  }
  if (err instanceof PolicyBlockedError) {
    // A blocking policy was violated
    return 'This request was blocked by security policy.';
  }
  throw err; // re-throw unexpected errors
}

SDK Configuration

| Option | Default | Description |
| --- | --- | --- |
| apiKey | EXECLAVE_API_KEY env | Your API key (exe_prod_xxx or exe_test_xxx) |
| baseUrl | http://localhost:4000 | Execlave API URL |
| environment | production | Deployment environment label |
| asyncMode | true | Buffer traces and flush in background |
| batchSize | 100 | Max traces per flush batch |
| flushIntervalMs | 10000 | Background flush interval (ms) |
| debug | false | Enable verbose debug logging |
| enableControlChannel | true | Poll for kill-switch status changes |
| pollIntervalMs | 15000 | Status polling interval (ms) |
| enableInjectionScan | true | Scan inputs for prompt injection patterns |
| enforcementOnOutage | fail_open | Behavior when API is unreachable (fail_open or fail_closed) |
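As a rough sketch of how these options resolve, user-supplied values override the table's defaults, and apiKey falls back to the environment variable. The merge helper below is hypothetical (not part of the SDK); only the option names and defaults come from the table:

```typescript
// Defaults mirroring the configuration table above.
const DEFAULTS = {
  baseUrl: 'http://localhost:4000',
  environment: 'production',
  asyncMode: true,
  batchSize: 100,
  flushIntervalMs: 10_000,
  debug: false,
  enableControlChannel: true,
  pollIntervalMs: 15_000,
  enableInjectionScan: true,
  enforcementOnOutage: 'fail_open' as 'fail_open' | 'fail_closed',
};

type ExeclaveOptions = Partial<typeof DEFAULTS> & { apiKey?: string };

// Hypothetical helper: explicit options win, apiKey falls back to the env var.
function resolveOptions(opts: ExeclaveOptions = {}) {
  return {
    ...DEFAULTS,
    apiKey: process.env.EXECLAVE_API_KEY,
    ...opts,
  };
}
```

For example, resolveOptions({ debug: true, batchSize: 50 }) enables debug logging and shrinks the flush batch while leaving every other default in place.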

Key API Endpoints

The SDK handles these automatically, but here they are for direct API usage:

| Endpoint | Auth | Description |
| --- | --- | --- |
| POST /api/agents | X-API-Key | Register an agent |
| POST /api/traces/ingest | X-API-Key | Submit traces (up to 100 per batch) |
| POST /api/policies | X-API-Key (admin) | Create a governance policy |
| POST /api/policies/enforce | X-API-Key | Pre-execution policy check |
| PATCH /api/agents/:id/pause | X-API-Key (admin) | Pause agent (kill switch) |
| PATCH /api/agents/:id/resume | X-API-Key (admin) | Resume paused agent |
| GET /api/agents/:id/status | X-API-Key | Check agent status |
| POST /api/agents/:id/grants | X-API-Key (admin) | Create agent-to-agent access grant |
| POST /api/agents/authorize | X-API-Key | Check if agent call is authorized |
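For example, registering an agent directly can be sketched with fetch. The endpoint and X-API-Key header come from the table above and the payload mirrors the earlier registerAgent() call, but the helper names here are hypothetical and the response shape is an assumption:

```typescript
// Build the request for POST /api/agents (endpoint and auth header per the table).
function buildRegisterRequest(
  baseUrl: string,
  apiKey: string,
  agent: { agentId: string; name: string; type: string; platform: string }
) {
  return {
    url: `${baseUrl}/api/agents`,
    init: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': apiKey, // auth header from the endpoint table
      },
      body: JSON.stringify(agent),
    },
  };
}

// Send it (assumes the backend from Prerequisites is running on :4000).
async function registerAgentDirect(apiKey: string) {
  const { url, init } = buildRegisterRequest('http://localhost:4000', apiKey, {
    agentId: 'my-chatbot',
    name: 'Customer Support Bot',
    type: 'chatbot',
    platform: 'custom',
  });
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`Registration failed: ${res.status}`);
  return res.json();
}
```

The same pattern (X-API-Key header plus JSON body) applies to the other POST endpoints in the table.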

Environment Variables

Your application only needs one environment variable:

EXECLAVE_API_KEY=exe_prod_your_key_here

If you're running Execlave infrastructure locally, use these defaults:

# Backend
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/execlave
REDIS_URL=redis://localhost:6379
PORT=4000

# Frontend
NEXT_PUBLIC_API_URL=http://localhost:4000
NEXT_PUBLIC_GRAPHQL_URL=http://localhost:4000/graphql

# Start infrastructure
docker compose up -d        # PostgreSQL + Redis + MinIO
cd backend && npm run dev   # API server on :4000
cd frontend && npm run dev  # Dashboard on :3000

Self-hosted deployment

Run the full Execlave platform on your own infrastructure with a license key. Your data never leaves your network. Only a 24-hour license heartbeat (fingerprint + timestamp, no customer data) crosses the boundary.

Quickstart

curl -O https://get.execlave.com/docker-compose.yml
export LICENSE_KEY=exe_lic_<your-key>
docker compose up -d
open http://localhost:3000

Get a license key

Request one at /get-license. Free tier licenses arrive within one business hour. For the complete self-hosted guide — environment variables, license semantics, backups, air-gapped mode, troubleshooting — see the full self-hosted.md guide on GitHub.