Subprocessors
Last Updated: April 2026
This page lists the third-party service providers ("subprocessors") that Execlave engages to process personal data on behalf of our customers. We maintain this list in accordance with our commitments under applicable data protection law.
We will notify customers at least 30 days before adding a new subprocessor to this list, allowing customers time to review the change and raise any concerns.
Infrastructure Subprocessors
These subprocessors provide core infrastructure services for the Execlave platform.
| Vendor | Purpose | Data Categories | Location |
|---|---|---|---|
| Amazon Web Services (AWS) | Cloud infrastructure — compute, database (PostgreSQL/TimescaleDB), storage, networking, caching (Redis) | All Customer Data (encrypted at rest and in transit) | Configurable (US, EU, or other regions based on customer preference) |
Authentication Subprocessors
| Vendor | Purpose | Data Categories | Location |
|---|---|---|---|
| Clerk | User authentication, SSO, session management, and user profile storage | User names, email addresses, profile images, authentication tokens, session data | United States |
Payment Subprocessors
| Vendor | Purpose | Data Categories | Location |
|---|---|---|---|
| Stripe | Payment processing, subscription management, invoicing | Billing contact information, payment card details (not stored by Execlave), subscription status | United States |
AI/ML Processing
Execlave does not send any AI agent execution data to external AI APIs.
Semantic policy evaluation (when semantic_check_enabled = true) is performed entirely using local LLM models deployed within the same infrastructure as the Execlave backend. Agent inputs, action types, and conversation context never leave the deployment environment for AI processing.
For self-hosted deployments, customers control the LLM infrastructure entirely. For managed deployments, Execlave operates the LLM service within the same region as the rest of the platform infrastructure.
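As an illustration of keeping semantic evaluation local, a deployment's configuration might look along these lines. Only `semantic_check_enabled` appears in this document; the environment variable names and the local endpoint below are hypothetical assumptions, not Execlave's documented configuration:

```shell
# Hypothetical self-hosted configuration sketch (variable names assumed).
# Semantic policy evaluation stays on infrastructure you control:
export SEMANTIC_CHECK_ENABLED=true               # semantic_check_enabled = true
export LLM_ENDPOINT="http://localhost:8080/v1"   # local model server, not an external AI API
```

With a configuration of this shape, agent inputs and conversation context are only ever sent to the locally hosted model endpoint.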
Integration Subprocessors
Note: These subprocessors are only engaged when customers configure the corresponding integrations.
| Vendor | Purpose | Data Categories | Location |
|---|---|---|---|
| Slack Technologies | Approval workflow notifications, human-in-the-loop approval actions (when Slack integration is enabled) | Approval request summaries, user identifiers, approval decisions | United States |
Analytics Subprocessors (Optional)
Note: This subprocessor is only active when deployment administrators configure product analytics. Many deployments do not enable this feature.
| Vendor | Purpose | Data Categories | Location |
|---|---|---|---|
| PostHog | Product analytics and usage metrics (when NEXT_PUBLIC_POSTHOG_KEY is configured) | User identifiers, page views, feature usage events, organization context | United States (or EU, depending on PostHog deployment) |
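Because analytics is opt-in, PostHog only becomes an active subprocessor when the key is configured. A minimal sketch, assuming a standard Next.js-style environment file (`NEXT_PUBLIC_POSTHOG_HOST` and the placeholder key are illustrative, not taken from this document):

```shell
# Product analytics is disabled unless this key is set.
# Leave NEXT_PUBLIC_POSTHOG_KEY unset to keep PostHog out of the data flow.
export NEXT_PUBLIC_POSTHOG_KEY="phc_xxxxxxxx"              # placeholder project key
export NEXT_PUBLIC_POSTHOG_HOST="https://eu.posthog.com"   # hypothetical: EU ingestion host, if required
```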
Self-Hosted Deployments
For customers using self-hosted deployments, the subprocessor landscape differs:
- AWS is replaced by the customer's own infrastructure provider.
- AI/ML processing runs entirely on local LLM models within your infrastructure — no external AI APIs are used.
- PostHog can be disabled or replaced with a self-hosted PostHog instance.
- Clerk and Stripe remain subprocessors unless the customer implements alternative authentication and billing systems.
Contact us at support@execlave.com for information about subprocessor configurations for self-hosted deployments.
Change Notification Process
When we add a new subprocessor:
- We update this page with the new subprocessor information.
- We notify customers via email to the Organization Owner's registered email address at least 30 days before the new subprocessor begins processing data.
- Customers with concerns may contact us to discuss alternatives or exercise their rights under applicable data protection law.
Contact
For questions about our subprocessors or data processing practices:
Email: support@execlave.com
Legal: security@execlave.com
