
§ COMPLIANCE · EU AI ACT

EU AI Act compliance with Execlave

Article-by-article mapping of Execlave controls to the EU AI Act. Not legal advice — an engineering reference for teams that need evidence an auditor will accept.

§ 01

Scope

Who the Act applies to and what Execlave takes off your plate.

Regulation (EU) 2024/1689 creates a risk-tiered framework for AI systems placed on the EU market. High-risk systems (Annex III) carry the heaviest obligations: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy and robustness, cybersecurity, and post-market monitoring. Providers of general-purpose AI models face a separate set of obligations that took effect on 2 August 2025.

This page is a practical engineering reference. It pairs the articles most operators actually care about with the Execlave control that produces the evidence an auditor will ask for.

§ 02

Article-by-article mapping

One card per article. Where an article has multiple sub-clauses, we cover the ones that are implementable at runtime.

Article 9 — Risk management

  • 12 policy types covering prompt injection, data access, cost, quality, tools.
  • Four enforcement modes — monitor, warn, require_approval, block.
  • Incident tracking with severity, timeline, and resolution workflow.
  • Budget enforcement at the agent and organisation level.
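
To make the four enforcement modes concrete, here is a minimal sketch of how a matched rule might be turned into a runtime decision. The `Mode`, `Decision`, and `enforce` names are illustrative, not the Execlave SDK API:

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    MONITOR = "monitor"
    WARN = "warn"
    REQUIRE_APPROVAL = "require_approval"
    BLOCK = "block"

@dataclass
class Decision:
    allowed: bool          # may the action proceed right now?
    needs_approval: bool   # must a human sign off first?
    reason: str            # human-readable explanation (see Article 13)

def enforce(mode: Mode, rule_id: str, violation: str) -> Decision:
    """Map an enforcement mode to a runtime decision for a matched rule."""
    if mode is Mode.BLOCK:
        return Decision(False, False, f"{rule_id}: blocked ({violation})")
    if mode is Mode.REQUIRE_APPROVAL:
        return Decision(False, True, f"{rule_id}: held for approval ({violation})")
    # monitor and warn both let the call through; warn also raises an alert
    return Decision(True, False, f"{rule_id}: {mode.value} ({violation})")

d = enforce(Mode.REQUIRE_APPROVAL, "cost-cap-01", "projected spend over budget")
```

The point of the shape: every decision carries the rule ID and a reason string, so the audit log records *why* an action was held, not just that it was.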

Article 10 — Data governance

  • SDK PII scrubbing across 14 categories before data leaves your process.
  • Input sanitisation middleware on every ingest endpoint.
  • EU data residency on enterprise tier — storage and processing inside the EU.
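
As an illustration of boundary-side scrubbing, the sketch below tokenises email addresses with a salted hash before text leaves the process. The real SDK covers 14 PII categories; the regex and helper names here are hypothetical:

```python
import hashlib
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub(text: str, salt: bytes = b"per-tenant-salt") -> str:
    """Replace each email with a salted SHA-256 token, so traces stay
    joinable (same address -> same token) without storing the raw value."""
    def token(m: re.Match) -> str:
        digest = hashlib.sha256(salt + m.group(0).lower().encode()).hexdigest()
        return f"<email:{digest[:12]}>"
    return EMAIL.sub(token, text)

clean = scrub("contact ada@example.com about the invoice")
```

Hashing rather than redacting preserves analytics (you can still count distinct users per trace) while keeping raw PII out of storage.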

Article 12 — Record-keeping

  • Every agent execution traced with input, output, model, tokens, cost, latency.
  • Append-only audit log; UPDATE/DELETE blocked at DB trigger level.
  • Hash-chained entries — tampering is detectable offline.
  • Retention configurable up to 10 years (Article 19 aligned).
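
The tamper-evidence property is simple to state in code. A minimal, self-contained sketch of a hash chain and its offline verifier (not the production schema):

```python
import hashlib
import json

def append_entry(chain: list[dict], payload: dict) -> None:
    """Append an entry whose hash covers the payload and the previous hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(payload, sort_keys=True)
    h = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"prev": prev, "payload": payload, "hash": h})

def verify(chain: list[dict]) -> bool:
    """Recompute every link offline; editing any earlier entry breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps(entry["payload"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Because each hash folds in its predecessor, an auditor can verify an exported log without trusting the database it came from.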

Article 13 — Transparency

  • Every enforcement decision carries a human-readable reason and rule IDs.
  • Agent cards expose capabilities, tools, and policy summary.

Article 14 — Human oversight

  • Kill switch from dashboard or Slack.
  • require_approval mode halts execution until a human decides.
  • Approval expiry (default 30m) prevents indefinitely hung requests.
  • Real-time dashboard view of every pending decision.
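
The approval lifecycle reduces to a small state function. A sketch assuming the 30-minute default above; the names are illustrative:

```python
from datetime import datetime, timedelta, timezone

APPROVAL_TTL = timedelta(minutes=30)  # matches the 30m default

def approval_state(requested_at: datetime, decided: bool, now: datetime) -> str:
    """A request is pending until a human decides or the TTL lapses."""
    if decided:
        return "decided"
    if now - requested_at > APPROVAL_TTL:
        return "expired"  # execution stays halted; the agent never auto-resumes
    return "pending"
```

The key design choice is that expiry fails closed: a lapsed approval leaves the agent halted rather than quietly letting the action through.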

Article 15 — Accuracy, robustness, cybersecurity

  • Client-side and server-side prompt-injection scanning with severity scoring.
  • Semantic enforcement tiers beyond pattern matching.
  • Quality thresholds enforced per policy.
  • Circuit breaker in both SDKs; PostgreSQL RLS for tenant isolation.
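
Pattern-tier scanning can be sketched in a few lines. The patterns and severities below are illustrative stand-ins; the semantic tiers mentioned above go beyond anything regex can catch:

```python
import re

# Hypothetical pattern tiers with severity weights.
PATTERNS = [
    (re.compile(r"ignore (all )?previous instructions", re.I), 0.9),
    (re.compile(r"reveal (your )?system prompt", re.I), 0.8),
    (re.compile(r"base64", re.I), 0.3),
]

def injection_score(text: str) -> float:
    """Return the highest severity among matched patterns (0.0 = clean)."""
    return max((sev for pat, sev in PATTERNS if pat.search(text)), default=0.0)
```

A scored result (rather than a boolean) lets policies route by severity, e.g. warn below 0.5 and block above it.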

Article 17 — Quality management

  • Policy versioning with before/after audit entries.
  • Prompt versioning captured in the SDK for every trace.
  • Deployment tracking as a first-class entity with approval trail.

Article 19 — Automatically generated logs

  • Default 1-year retention; 10 years on enterprise.
  • CSV and JSON exports for any window.
  • RSA-SHA256 signed PDF/HTML reports for regulator-facing disclosure.

Article 26 — Deployer obligations

  • Per-agent request volume, token, and cost metering.
  • EWMA anomaly detection with seasonal decomposition.
  • Webhook / Slack alerts on configurable thresholds.
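
A minimal EWMA-based detector (without the seasonal decomposition) shows the shape of the approach; the smoothing factor and threshold here are illustrative:

```python
def ewma_anomaly(values: list[float], alpha: float = 0.3,
                 threshold: float = 3.0) -> list[bool]:
    """Flag points whose deviation from the EWMA exceeds threshold times
    the EWMA of absolute deviations (a cheap stand-in for a moving sigma)."""
    mean = values[0]
    dev = 0.0
    flags = [False]
    for x in values[1:]:
        delta = abs(x - mean)
        flags.append(dev > 0 and delta > threshold * dev)
        mean = alpha * x + (1 - alpha) * mean
        dev = alpha * delta + (1 - alpha) * dev
    return flags
```

Tracking the deviation as its own EWMA means the threshold adapts to each agent's normal volatility instead of using one global cutoff.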

Article 50 — Transparency to end users

  • SDK emits a provenance header on every response for downstream watermarking.
  • Agent identity surfaced for user-facing disclosure banners.

§ 03

Generating an EU AI Act report

The compliance export endpoint produces a signed, time-bounded report mapping evidence back to each Article.

Signed compliance report request

POST /api/v1/compliance/report
Authorization: Bearer exe_live_...
Content-Type: application/json

{
  "from": "2026-01-01",
  "to": "2026-03-31",
  "frameworks": ["eu_ai_act"],
  "format": "html",
  "sign": true
}

The response includes a signature (RSA-SHA256 over the canonical payload), a publicKeyId, and a download URL. Attach the signature and the public key (published at /.well-known/report-keys.json) to your regulator package — a third party can verify the report offline without contacting Execlave.
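
Offline verification presupposes an agreed canonical form of the payload. The sketch below shows the digest step using sorted-key, whitespace-free JSON; the exact canonicalisation Execlave uses is not specified here, so treat this as illustrative:

```python
import hashlib
import json

def canonical_digest(report: dict) -> bytes:
    """SHA-256 over a canonical JSON serialisation (sorted keys, no
    insignificant whitespace). The report's RSA-SHA256 signature is taken
    over a digest like this; verify it against the key published at
    /.well-known/report-keys.json with any standard RSA library."""
    canonical = json.dumps(report, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).digest()
```

Because the serialisation is deterministic, two parties who hold the same report bytes will always derive the same digest, which is what makes third-party verification possible without contacting Execlave.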

§ 04

Deadlines

The dates that belong on your roadmap.

2 February 2025 — Any AI system placed on the EU market
Prohibited-use controls (social scoring, manipulative profiling, predictive policing, untargeted biometric scraping, real-time public biometric ID) plus AI literacy obligations.

2 August 2025 — Providers of general-purpose AI models
Technical documentation, training-data summary, downstream-operator cooperation, copyright-compliance policy. GPAI models already on the market by this date have until 2 August 2027.

2 August 2026 — Providers and deployers of high-risk AI systems under Annex III
Full Article 9–17 stack: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, cybersecurity, post-market monitoring. Plan against this date.

2 August 2027 — High-risk AI that is a safety component of a regulated product under Annex I (medical devices, machinery, IVDs, etc.)
Conformity assessment aligned with the existing product-safety regime, plus the full high-risk stack. Pre-existing GPAI models also reach their compliance deadline on this date.

§ 05

Engineering checklist

A pragmatic subset to tackle first. Execlave ships each item below out of the box.

Eight controls that move the needle

  • Every tool call passes through a policy engine with a recorded decision.
  • Every override approval is tied to a human identity and a timestamp.
  • Audit logs are append-only and retained ≥ 1 year, ideally 10.
  • PII is hashed at the boundary — never stored in the trace body.
  • A kill switch can stop any agent within seconds.
  • Every released agent has a prompt version, a policy version, a deployment record.
  • Compliance reports are generated on demand and verifiable offline.
  • An SBOM is produced for every SDK release and archived for 10 years.

If you build this yourself, budget 3–6 months and a dedicated engineer.