Prerequisites
Choose Your SDK
Instant Setup with AI Assistant
Copy this prompt and paste it into GitHub Copilot, Cursor, or any AI coding assistant. It will set up Execlave in your project automatically.
Step-by-Step Guide
Install the SDK
Install the Execlave SDK package (zero runtime dependencies):
```bash
npm install @execlave/sdk
```

Get your API Key
Go to Dashboard → Settings → API Keys and generate a new key. Copy the key; it starts with exe_prod_ or exe_test_.
Add it to your environment:
```bash
# .env
EXECLAVE_API_KEY=exe_prod_your_key_here
```

Initialize the SDK
Create a shared client instance:
```typescript
// lib/execlave.ts
import { Execlave } from '@execlave/sdk';

export const ag = new Execlave({
  apiKey: process.env.EXECLAVE_API_KEY!,
  baseUrl: 'http://localhost:4000', // your Execlave API
  environment: process.env.NODE_ENV || 'development',
});
```

Register your Agent
Register your AI agent on startup. This is idempotent — safe to call every time your app starts.
```typescript
import { ag } from './lib/execlave';

// Call once during app initialization
await ag.registerAgent({
  agentId: 'my-chatbot',        // unique identifier for your agent
  name: 'Customer Support Bot', // display name in dashboard
  type: 'chatbot',              // chatbot | copilot | autonomous | workflow
  platform: 'custom',           // custom | openai | anthropic | langchain
});
```

Trace your first LLM call
Wrap your LLM calls to capture inputs, outputs, tokens, and cost:
```typescript
import { ag } from './lib/execlave';

async function handleMessage(userMessage: string) {
  const trace = ag.startTrace({
    agentId: 'my-chatbot',
    sessionId: 'user-session-123', // optional: groups a conversation
  });
  trace.setInput(userMessage);

  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: userMessage }],
  });
  const answer = response.choices[0].message.content;

  trace
    .setOutput(answer)
    .setModel('gpt-4')
    .setTokens(response.usage?.prompt_tokens, response.usage?.completion_tokens)
    .setCost(0.03)
    .finish();

  return answer;
}
```
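Rather than hardcoding the cost as above, you can derive it from the usage object. A minimal sketch, assuming illustrative per-1K-token rates (the numbers below are placeholders, not real pricing — check your provider's current rate card):

```typescript
// Illustrative per-1K-token rates (placeholders, not real pricing).
const RATES: Record<string, { prompt: number; completion: number }> = {
  'gpt-4': { prompt: 0.03, completion: 0.06 },
};

function estimateCost(
  model: string,
  promptTokens = 0,
  completionTokens = 0,
): number {
  const rate = RATES[model];
  if (!rate) return 0; // unknown model: report zero rather than guess
  return (
    (promptTokens / 1000) * rate.prompt +
    (completionTokens / 1000) * rate.completion
  );
}

// e.g. trace.setCost(estimateCost('gpt-4', response.usage?.prompt_tokens, response.usage?.completion_tokens));
```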
Or use the simpler wrap() helper:

```typescript
const tracedCall = ag.wrap(
  async (input: string) => {
    const res = await openai.chat.completions.create({ ... });
    return res.choices[0].message.content;
  },
  { agentId: 'my-chatbot' }
);
```

Verify in the Dashboard
Open the Execlave dashboard. You should see your agent registered and traces flowing in.
What's Next?
Error Handling
The SDK exports specific error types so you can handle governance events gracefully:
```typescript
import { AgentPausedError, PolicyBlockedError } from '@execlave/sdk';

try {
  const result = await tracedCall('Process this order');
} catch (err) {
  if (err instanceof AgentPausedError) {
    // Admin paused this agent via kill switch
    return 'Service temporarily unavailable.';
  }
  if (err instanceof PolicyBlockedError) {
    // A blocking policy was violated
    return 'This request was blocked by security policy.';
  }
  throw err; // re-throw unexpected errors
}
```

SDK Configuration
| Option | Default | Description |
|---|---|---|
| apiKey | EXECLAVE_API_KEY env | Your API key (exe_prod_xxx or exe_test_xxx) |
| baseUrl | http://localhost:4000 | Execlave API URL |
| environment | production | Deployment environment label |
| asyncMode | true | Buffer traces and flush in background |
| batchSize | 100 | Max traces per flush batch |
| flushIntervalMs | 10000 | Background flush interval (ms) |
| debug | false | Enable verbose debug logging |
| enableControlChannel | true | Poll for kill-switch status changes |
| pollIntervalMs | 15000 | Status polling interval (ms) |
| enableInjectionScan | true | Scan inputs for prompt injection patterns |
| enforcementOnOutage | fail_open | Behavior when API is unreachable (fail_open or fail_closed) |
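Pulling the table together, these options can be passed to the constructor alongside `apiKey`. A hedged sketch — the `ExeclaveOptions` shape below is inferred from the table, not copied from the SDK's actual type definitions:

```typescript
// Option shape inferred from the table above; the SDK's real type may differ.
interface ExeclaveOptions {
  apiKey?: string;
  baseUrl?: string;
  environment?: string;
  asyncMode?: boolean;
  batchSize?: number;
  flushIntervalMs?: number;
  debug?: boolean;
  enableControlChannel?: boolean;
  pollIntervalMs?: number;
  enableInjectionScan?: boolean;
  enforcementOnOutage?: 'fail_open' | 'fail_closed';
}

// Defaults as documented in the table.
const DEFAULTS: Required<Omit<ExeclaveOptions, 'apiKey'>> = {
  baseUrl: 'http://localhost:4000',
  environment: 'production',
  asyncMode: true,
  batchSize: 100,
  flushIntervalMs: 10_000,
  debug: false,
  enableControlChannel: true,
  pollIntervalMs: 15_000,
  enableInjectionScan: true,
  enforcementOnOutage: 'fail_open',
};

// Explicit options override the defaults.
const options: ExeclaveOptions = {
  ...DEFAULTS,
  apiKey: process.env.EXECLAVE_API_KEY,
  enforcementOnOutage: 'fail_closed', // block requests when the API is unreachable
};
```

The `fail_closed` override trades availability for safety: if the Execlave API is down, agent calls are blocked instead of allowed through untracked.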
Key API Endpoints
The SDK handles these automatically, but here they are for direct API usage:
| Endpoint | Auth | Description |
|---|---|---|
| POST /api/agents | X-API-Key | Register an agent |
| POST /api/traces/ingest | X-API-Key | Submit traces (up to 100 per batch) |
| POST /api/policies | X-API-Key (admin) | Create a governance policy |
| POST /api/policies/enforce | X-API-Key | Pre-execution policy check |
| PATCH /api/agents/:id/pause | X-API-Key (admin) | Pause agent (kill switch) |
| PATCH /api/agents/:id/resume | X-API-Key (admin) | Resume paused agent |
| GET /api/agents/:id/status | X-API-Key | Check agent status |
| POST /api/agents/:id/grants | X-API-Key (admin) | Create agent-to-agent access grant |
| POST /api/agents/authorize | X-API-Key | Check if agent call is authorized |
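For direct API usage without the SDK, each endpoint takes a plain `X-API-Key` header. A minimal sketch using the built-in `fetch`/`Request` of Node 18+; the body fields mirror the `registerAgent()` call shown earlier and are an assumption about the wire format, not a documented schema:

```typescript
// Build a registration request against POST /api/agents.
// Body fields mirror registerAgent() above (assumed, not a documented schema).
function buildRegisterRequest(baseUrl: string, apiKey: string): Request {
  return new Request(`${baseUrl}/api/agents`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': apiKey,
    },
    body: JSON.stringify({
      agentId: 'my-chatbot',
      name: 'Customer Support Bot',
      type: 'chatbot',
      platform: 'custom',
    }),
  });
}

// const res = await fetch(buildRegisterRequest('http://localhost:4000', process.env.EXECLAVE_API_KEY!));
```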
Environment Variables
Your application only needs one environment variable:
```bash
EXECLAVE_API_KEY=exe_prod_your_key_here
```

If you're running Execlave infrastructure locally, use these defaults:
```bash
# Backend
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/execlave
REDIS_URL=redis://localhost:6379
PORT=4000

# Frontend
NEXT_PUBLIC_API_URL=http://localhost:4000
NEXT_PUBLIC_GRAPHQL_URL=http://localhost:4000/graphql
```

```bash
# Start infrastructure
docker compose up -d        # PostgreSQL + Redis + MinIO
cd backend && npm run dev   # API server on :4000
cd frontend && npm run dev  # Dashboard on :3000
```

Self-hosted deployment
Run the full Execlave platform on your own infrastructure with a license key. Your data never leaves your network. Only a 24-hour license heartbeat (fingerprint + timestamp, no customer data) crosses the boundary.
Quickstart
```bash
curl -O https://get.execlave.com/docker-compose.yml
export LICENSE_KEY=exe_lic_<your-key>
docker compose up -d
open http://localhost:3000
```

Get a license key
Request one at /get-license. Free tier licenses arrive within one business hour. For the complete self-hosted guide — environment variables, license semantics, backups, air-gapped mode, troubleshooting — see the full self-hosted.md guide on GitHub.
