
§ DOCUMENTATION

n8n Integration

Add Execlave guardrails to your n8n AI workflows in three ways: import a template, ask a copilot, or build by hand.

§ 01

Prerequisites

Execlave API key
Create one in Settings → API keys and store it in an n8n Header Auth credential with the header name X-API-Key.
Registered agent + n8n workflow
Use the agent UUID from your dashboard and an existing workflow with at least one LLM call.
§ 02

Choose your path

Pick the fastest route for your team. All three end with the same governed architecture.

§ 03

Instant setup with a copilot

Paste this prompt and your current n8n workflow JSON into an AI assistant. It will return a governed version with pre-enforce and post-trace nodes.

§ Copy prompt · paste into your AI coding assistant
I'm using n8n. Modify my workflow so every LLM call passes through Execlave — an AI agent governance platform that runs at the HTTP layer (enforce policies before the call, log traces after).

Context for you (the assistant):
- Execlave API host: https://api.execlave.com
- Auth: header `X-API-Key: <EXECLAVE_API_KEY>` on every request. In n8n this is a "Header Auth" credential.
- Two endpoints matter:
  POST /api/policies/enforce
    body: { agentId: "<UUID>", input: "<user input>", environment: "production", estimatedCost?: number, tools?: string[] }
    200 -> { allowed: true, warnings: [...] }          // run the model
    403 -> { allowed: false, violations: [...] }       // block, return reason to caller
    202 -> { allowed: false, requiresApproval: true, approvalRequestId, pollUrl, violations }  // human gate
  POST /api/traces/ingest
    body: { traces: [{ traceId, agentId, status: "success"|"error"|"policy_blocked", input, output, modelName, promptTokens, completionTokens, totalTokens, costUsd?, environment }] }

How to modify my workflow:
1. Find every HTTP Request node that calls an LLM (OpenAI, Anthropic, Gemini, Mistral, etc.) OR every `@n8n/n8n-nodes-langchain.agent` node.
2. Insert a new HTTP Request node BEFORE it, named "Execlave: Enforce":
   - POST to https://api.execlave.com/api/policies/enforce
   - Header Auth credential "Execlave API Key"
   - Body (JSON): { "agentId": "<my-agent-uuid>", "input": "<reference the same prompt the LLM node uses>", "environment": "production" }
   - Under Options → Response, enable "Never Error" and "Full Response" so we can branch on the status code.
3. Insert an IF node after enforce. Condition: `{{ $json.body.allowed }}` is true.
   - TRUE branch -> original LLM node.
   - FALSE branch -> Respond to Webhook with the reason from `violations[0].message`, HTTP 403.
4. AFTER the LLM node, insert another HTTP Request node named "Execlave: Ingest Trace":
   - POST to https://api.execlave.com/api/traces/ingest
   - Same Header Auth credential
   - Body: { "traces": [{ "traceId": "={{ $execution.id }}", "agentId": "<same external id>", "status": "success", "input": "<same user input>", "output": "<LLM response text>", "modelName": "<model name>", "promptTokens": <n>, "completionTokens": <n>, "environment": "production" }] }
5. If any enforce call returns 202 with `requiresApproval: true`, that means a policy is in require_approval mode. Add a Wait node (30s) + HTTP GET to `pollUrl` + IF on `$json.body.data.status === "approved"` to proceed or deny.
6. Leave my existing happy-path logic intact. Only add Execlave gates and traces around LLM calls.

Return the full modified workflow JSON, node-by-node. Do not ask me follow-up questions — make reasonable defaults and note them in comments.
§ 04

Import a template

Start from a proven flow and swap only your credentials, agent ID, and model settings.

Enforce + LLM + Trace

Gate every LLM call with policy enforcement, run the model only if allowed, and log the trace. The canonical governed-workflow shape — use this for 80% of cases.

Approximate nodes: 7

{
  "name": "Execlave — Enforce + LLM + Trace",
  "nodes": [
    {
      "parameters": {
        "httpMethod": "POST",
        "path": "execlave-chat",
        "responseMode": "responseNode",
        "options": {}
      },
      "id": "webhook-1",
      "name": "Webhook",
      "type": "n8n-nodes-base.webhook",
      "typeVersion": 2,
      "position": [240, 320]
    },
    {
      "parameters": {
        "method": "POST",
        "url": "https://api.execlave.com/api/policies/enforce",
        "authentication": "predefinedCredentialType",
        "nodeCredentialType": "httpHeaderAuth",
        "sendBody": true,
        "bodyContentType": "json",
        "jsonBody": "={\n  \"agentId\": \"REPLACE_WITH_AGENT_UUID\",\n  \"input\": \"={{ $json.body.message }}\",\n  \"environment\": \"production\",\n  \"estimatedCost\": 0.02\n}",
        "options": {
          "response": {
            "response": {
              "fullResponse": true,
              "neverError": true
            }
          }
        }
      },
      "id": "enforce-1",
      "name": "Execlave: Enforce",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.2,
      "position": [460, 320],
      "credentials": {
        "httpHeaderAuth": { "id": "", "name": "Execlave API Key" }
      }
    },
    {
      "parameters": {
        "conditions": {
          "options": {
            "caseSensitive": true,
            "leftValue": "",
            "typeValidation": "strict"
          },
          "conditions": [
            {
              "id": "allowed-check",
              "leftValue": "={{ $json.body.allowed }}",
              "rightValue": true,
              "operator": {
                "type": "boolean",
                "operation": "true",
                "singleValue": true
              }
            }
          ],
          "combinator": "and"
        },
        "options": {}
      },
      "id": "if-allowed-1",
      "name": "IF allowed",
      "type": "n8n-nodes-base.if",
      "typeVersion": 2,
      "position": [680, 320]
    },
    {
      "parameters": {
        "method": "POST",
        "url": "https://api.openai.com/v1/chat/completions",
        "authentication": "predefinedCredentialType",
        "nodeCredentialType": "openAiApi",
        "sendBody": true,
        "bodyContentType": "json",
        "jsonBody": "={\n  \"model\": \"gpt-4\",\n  \"messages\": [{ \"role\": \"user\", \"content\": \"={{ $('Webhook').item.json.body.message }}\" }]\n}",
        "options": {}
      },
      "id": "openai-1",
      "name": "OpenAI",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.2,
      "position": [900, 220],
      "credentials": {
        "openAiApi": { "id": "", "name": "OpenAI account" }
      }
    },
    {
      "parameters": {
        "method": "POST",
        "url": "https://api.execlave.com/api/traces/ingest",
        "authentication": "predefinedCredentialType",
        "nodeCredentialType": "httpHeaderAuth",
        "sendBody": true,
        "bodyContentType": "json",
        "jsonBody": "={\n  \"traces\": [{\n    \"traceId\": \"={{ $execution.id }}\",\n    \"agentId\": \"REPLACE_WITH_AGENT_UUID\",\n    \"status\": \"success\",\n    \"input\": \"={{ $('Webhook').item.json.body.message }}\",\n    \"output\": \"={{ $json.choices[0].message.content }}\",\n    \"modelName\": \"gpt-4\",\n    \"promptTokens\": \"={{ $json.usage.prompt_tokens }}\",\n    \"completionTokens\": \"={{ $json.usage.completion_tokens }}\",\n    \"totalTokens\": \"={{ $json.usage.total_tokens }}\",\n    \"environment\": \"production\"\n  }]\n}",
        "options": {}
      },
      "id": "trace-1",
      "name": "Execlave: Ingest Trace",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.2,
      "position": [1120, 220],
      "credentials": {
        "httpHeaderAuth": { "id": "", "name": "Execlave API Key" }
      }
    },
    {
      "parameters": {
        "respondWith": "json",
        "responseBody": "={{ { answer: $('OpenAI').item.json.choices[0].message.content } }}",
        "options": {}
      },
      "id": "respond-ok-1",
      "name": "Respond OK",
      "type": "n8n-nodes-base.respondToWebhook",
      "typeVersion": 1,
      "position": [1340, 220]
    },
    {
      "parameters": {
        "respondWith": "json",
        "responseBody": "={{ { blocked: true, reason: $('Execlave: Enforce').item.json.body.violations[0].message } }}",
        "options": {
          "responseCode": 403
        }
      },
      "id": "respond-blocked-1",
      "name": "Respond Blocked",
      "type": "n8n-nodes-base.respondToWebhook",
      "typeVersion": 1,
      "position": [900, 440]
    }
  ],
  "connections": {
    "Webhook": {
      "main": [[{ "node": "Execlave: Enforce", "type": "main", "index": 0 }]]
    },
    "Execlave: Enforce": {
      "main": [[{ "node": "IF allowed", "type": "main", "index": 0 }]]
    },
    "IF allowed": {
      "main": [
        [{ "node": "OpenAI", "type": "main", "index": 0 }],
        [{ "node": "Respond Blocked", "type": "main", "index": 0 }]
      ]
    },
    "OpenAI": {
      "main": [[{ "node": "Execlave: Ingest Trace", "type": "main", "index": 0 }]]
    },
    "Execlave: Ingest Trace": {
      "main": [[{ "node": "Respond OK", "type": "main", "index": 0 }]]
    }
  },
  "settings": {
    "executionOrder": "v1"
  }
}

In n8n, go to Workflows → Import from JSON, paste the template, then set your credentials and IDs.

§ 05

Build by hand

Use this path when you already have a workflow and only want to add governance nodes around existing LLM steps.

§ Step 01

Create Header Auth credential

In n8n, open Credentials → Add Credential → Header Auth. Set header name to X-API-Key and value to your Execlave key.

§ Step 02

Add Enforce node before the LLM node

Insert an HTTP Request node that posts to https://api.execlave.com/api/policies/enforce with your agent ID and user input.
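The body this node sends can be sketched as a small helper. Field names (agentId, input, environment, estimatedCost, tools) come from the enforce endpoint described above; the function name and defaults are illustrative, not part of the Execlave SDK.

```python
def build_enforce_body(agent_id, user_input, environment="production",
                       estimated_cost=None, tools=None):
    """Build the JSON body for POST /api/policies/enforce.

    estimatedCost and tools are optional per the endpoint spec,
    so they are only included when provided.
    """
    body = {
        "agentId": agent_id,
        "input": user_input,
        "environment": environment,
    }
    if estimated_cost is not None:
        body["estimatedCost"] = estimated_cost
    if tools:
        body["tools"] = tools
    return body
```

In n8n itself this shape lives in the node's JSON body expression; the helper just makes the required vs. optional fields explicit.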

§ Step 03

Branch on allowed

Add an IF node with condition {{ $json.body.allowed }}. True continues to the LLM node. False returns 403 with the first violation message.
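The IF node's routing logic, including the 202 approval case, can be written out as a pure function. This is a sketch assuming the status codes and body fields (allowed, requiresApproval, violations) documented for the enforce endpoint; the function and branch names are ours.

```python
def route_enforce_response(status_code, body):
    """Map an Execlave enforce response to the next workflow branch."""
    if status_code == 200 and body.get("allowed"):
        return "run_llm"                      # TRUE branch -> LLM node
    if status_code == 202 and body.get("requiresApproval"):
        return "wait_for_approval"            # human gate -> Wait + poll
    # 403 (or anything unexpected): surface the first violation to the caller
    violations = body.get("violations") or [{"message": "Blocked by policy"}]
    return "blocked: " + violations[0]["message"]
```

Note that "Never Error" and "Full Response" must be enabled on the enforce node, otherwise n8n treats the 403 as a failure before the IF node ever sees it.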

§ Step 04

Ingest trace after model response

Add another HTTP Request node after the LLM call posting to https://api.execlave.com/api/traces/ingest. Include trace ID, model, input/output, and token metadata if available.
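A single trace entry can be assembled like this. The Execlave field names come from the ingest endpoint above; the mapping from OpenAI-style usage keys (prompt_tokens, completion_tokens, total_tokens) and the helper name are assumptions for illustration.

```python
def build_trace(trace_id, agent_id, user_input, output, model_name,
                usage, status="success", environment="production"):
    """Build one entry for POST /api/traces/ingest, mapping OpenAI-style
    usage fields onto Execlave's token fields. Missing usage keys default
    to 0 so the payload stays numerically valid."""
    return {
        "traceId": trace_id,
        "agentId": agent_id,
        "status": status,               # "success" | "error" | "policy_blocked"
        "input": user_input,
        "output": output,
        "modelName": model_name,
        "promptTokens": usage.get("prompt_tokens", 0),
        "completionTokens": usage.get("completion_tokens", 0),
        "totalTokens": usage.get("total_tokens", 0),
        "environment": environment,
    }
```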

§ Step 05

Choose outage behavior

Decide whether an Execlave node outage should fail open or fail closed by configuring the node's error handling in n8n and your alerting policy.
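The outage decision itself is one branch, sketched below. The function and its parameters are illustrative; the only firm rule from this guide is that a fail-open path must alert so skipped guardrails stay visible.

```python
def on_enforce_outage(error, fail_open, notify=print):
    """Pick the branch when the enforce call itself errors out.

    fail_open=True runs the model anyway but notifies, so skipped
    guardrails stay visible; fail_open=False blocks the request.
    """
    if fail_open:
        notify("Execlave unreachable, guardrails skipped: " + error)
        return "run_llm"
    return "blocked: governance service unreachable"
```

Fail-closed is the safer default for production compliance flows; fail-open trades safety for availability and only makes sense with alerting wired in.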

§ 06

Using n8n AI Agent node

For @n8n/n8n-nodes-langchain.agent flows, enforce before the agent and trace after agent output.

Keep this shape: Chat Trigger → Enforce → IF allowed → AI Agent → Ingest Trace.

{
  "agentId": "REPLACE_WITH_AGENT_UUID",
  "input": "={{ $json.chatInput }}",
  "environment": "production",
  "tools": ["web_search", "calculator"]
}
§ 07

Human approval loop

For policies in require_approval mode, wait and poll until an explicit human decision is recorded.

When enforcement returns requiresApproval: true, use a Wait node, then poll the approval endpoint from pollUrl, and branch on data.status equals approved.

{
  "step1": "POST /api/policies/enforce",
  "expect202": {
    "requiresApproval": true,
    "approvalRequestId": "apr_123",
    "pollUrl": "/api/approvals/apr_123"
  },
  "step2": "Wait 30s",
  "step3": "GET {{ pollUrl }}",
  "branch": {
    "approved": "execute action + ingest trace",
    "denied": "stop action + respond with reason"
  }
}
```
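The Wait-and-poll loop above can be sketched as a function. It assumes the polled body carries data.status with values approved/denied, as described here; the callable injection, max_polls cap, and fail-closed timeout are our assumptions, not Execlave behavior.

```python
import time

def wait_for_approval(poll, wait_seconds=30, max_polls=20, sleep=time.sleep):
    """Poll the approval endpoint until a human decision is recorded.

    `poll` is any callable returning the decoded body of GET {pollUrl}.
    Returns True on approval, False on denial or timeout (fail-closed).
    """
    for _ in range(max_polls):
        body = poll()
        status = body.get("data", {}).get("status")
        if status == "approved":
            return True
        if status == "denied":
            return False
        sleep(wait_seconds)          # mirrors the n8n Wait node
    return False                     # no decision in time: treat as denied
```

In n8n this loop is the Wait node feeding back into the HTTP GET; capping the iterations keeps a forgotten approval from stalling the workflow forever.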
§ 08

Enforcement modes

Mode               Behavior                                          When to use
block              Stops execution immediately on violation.         Production flows with strict compliance requirements.
warn               Allows execution but returns warnings.            Progressive rollout before hard blocking.
monitor            Logs policy hits without interrupting flow.       Baseline learning and false-positive tuning.
require_approval   Pauses high-risk actions until a human decides.   Refunds, outbound messaging, data mutations.
§ 09

Error handling & troubleshooting

Enforcement unreachable

Configure explicit fail-open or fail-closed behavior. If fail-open, send an alert so skipped guardrails are visible.

401 unauthorized

Verify credential points to the correct key and that the key is active for this environment.

403 blocked

Surface the first violation in the response payload: $json.body.violations[0].message.

429 rate limit

Add a Wait node and retry with exponential backoff for bursty workflows.
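One way to express that retry policy, with the call injected as a callable so the backoff logic stands alone. The doubling schedule and retry cap are reasonable defaults, not values Execlave mandates.

```python
import time

def with_backoff(call, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry `call` while it returns HTTP 429, doubling the wait each time.

    `call` returns a (status_code, body) tuple. After max_retries the
    last response is returned as-is for the caller to handle.
    """
    for attempt in range(max_retries):
        status, body = call()
        if status != 429:
            return status, body
        sleep(base_delay * (2 ** attempt))   # 1s, 2s, 4s, ...
    return status, body
```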

Trace validation errors

Ensure status is valid and token/cost fields are numeric. Keep trace batches at 100 or fewer entries.
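When a workflow accumulates more traces than fit in one ingest call, split them before posting. A minimal sketch of that batching, assuming only the 100-entry limit stated above:

```python
def chunk_traces(traces, batch_size=100):
    """Split a list of trace entries into ingest batches of at most
    `batch_size`, matching the 100-entry limit on /api/traces/ingest."""
    return [traces[i:i + batch_size] for i in range(0, len(traces), batch_size)]
```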

No traces in dashboard

Confirm the ingest node runs after each LLM call and that agentId matches your registered agent.

§ 10

Pre-launch checklist

Use a staging API key first
Run policies in monitor mode for baseline
Flip critical rules to block or require_approval
Confirm traces appear on dashboard timelines
Verify approval flows end in approved/denied branches
Rotate to production key before go-live
§ 11

What's next