# Deployment Modes
This guide helps you choose the right deployment mode and understand the trade-offs between serverless and self-hosted options.
> **Ready to deploy?** Once you've chosen a mode, see the Deployment Guide for detailed step-by-step instructions.

> **Connect your development tools:** After deploying, see the Integration Guide to connect tools like OpenCode and Aider to your agent.
Lucid supports three deployment modes:
| Mode | Interface | Infrastructure | Best For |
|---|---|---|---|
| Serverless (Lucid-Managed) | CLI or Observer GUI | Lucid shared pools | Quick start, PoC, instant deployment |
| Self-Hosted | CLI | Your K8s cluster | Enterprise, full control |
| External Provider | CLI or Observer GUI | Provider (Anthropic/OpenAI) | Access to frontier models |
Serverless and self-hosted modes provide identical TEE security guarantees. External provider mode trades hardware attestation for access to proprietary models while retaining full auditor and Cedar policy enforcement.
## Serverless Mode (Recommended)
The fastest way to deploy AI workloads with TEE security guarantees. No infrastructure provisioning — deploy in seconds.
### How It Works

```mermaid
flowchart LR
    subgraph You["Your Side"]
        CLI["lucid apply<br/>--model --profile"]
        App["Your Application"]
    end
    subgraph Lucid["Lucid Platform"]
        Config["Environment<br/>Config"]
        subgraph Pools["Shared Pools (TEE)"]
            M["🧠 Models"]
            A["🛡️ Auditors"]
        end
    end
    CLI --> Config
    Config --> Pools
    App -->|"Direct TLS"| Pools
```
Key properties:

- **Instant deployment** - No infrastructure provisioning needed
- **Same TEE security** - Hardware attestation identical to self-hosted
- **Zero-trust** - You can verify attestation directly against Intel/AMD root of trust
- **Data isolation** - TEE memory isolation ensures Lucid never sees your plaintext data
- **Automatic GPU optimization** - Significant inference cost savings via batch tuning and quantization
### How Serverless Routing Works
When you deploy to serverless, the system creates an environment configuration, assigns it to available TEE resources matching your requirements (model, region, data residency), and provides routing endpoints. Your application connects directly to TEE endpoints over TLS — Lucid infrastructure is never in the data path.
You can verify TEE attestation client-side against the hardware vendor's root of trust using `lucid verify`.
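Because your application talks TLS directly to the TEE endpoint, and deployments expose an OpenAI-compatible API (see the Deployment Type section), any plain HTTPS client works. Here is a minimal stdlib sketch; the `env-abc123.serverless.lucid.ai` hostname is the placeholder from the CLI examples below, and the exact request path is an assumption based on the standard OpenAI-compatible layout:

```python
import json
import http.client

# Placeholder endpoint; your real hostname comes from `lucid apply` output.
ENDPOINT_HOST = "env-abc123.serverless.lucid.ai"

# Standard OpenAI-compatible chat-completions payload.
payload = json.dumps({
    "model": "llama-3.1-8b",
    "messages": [{"role": "user", "content": "Hello from my app"}],
})

# Direct TLS to the TEE endpoint -- no Lucid infrastructure in the data path.
# (Connection is lazy; nothing is sent until conn.request() is called.)
conn = http.client.HTTPSConnection(ENDPOINT_HOST, timeout=10)
# conn.request("POST", "/v1/chat/completions", body=payload,
#              headers={"Content-Type": "application/json"})
```

In practice you would use your usual OpenAI-compatible client library and simply point its base URL at the serverless endpoint.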
### CLI Usage

```bash
# Deploy with model and auditor profile
lucid apply --model llama-3.1-8b --profile chat

# Deploy with data residency requirement
lucid apply --model qwen-72b --profile coding --region eu

# Browse available resources
lucid catalog models
lucid catalog auditors

# Verify TEE attestation (client-side, against hardware root of trust)
lucid verify endpoint https://env-abc123.serverless.lucid.ai
```
### Observer GUI Usage

1. Open Observer and select "Deploy"
2. Choose "Serverless" (default)
3. Select your model and auditor profile from the catalog
4. Click Deploy — your environment is ready instantly
## External Provider Mode
Use frontier models from Anthropic and OpenAI while retaining Lucid's auditor enforcement and Cedar policy evaluation. The model runs on the provider's infrastructure (not in a TEE), so hardware attestation is not available.
### How It Works

```mermaid
flowchart LR
    subgraph You["Your Side"]
        CLI["lucid registry add-key"]
        Code["PlatformModel('my-agent')"]
    end
    subgraph Lucid["Lucid Platform"]
        GW["Verifier"]
        Auditors["Auditors (TEE)"]
        Cedar["Cedar Policies"]
    end
    subgraph Provider["External Provider"]
        Model["Claude / GPT"]
    end
    CLI --> GW
    Code -->|"HTTPS"| GW
    GW --> Cedar
    Cedar --> Auditors
    Auditors -->|"Proxy"| Model
```
### Setup

1. Register a provider API key:

   ```bash
   lucid registry add-key --provider anthropic --name "Production" --key sk-ant-...
   ```

2. Create an agent using an external model (via Observer UI or CLI).

3. Call the agent via the API (using the Lucid SDK):

   ```python
   from lucid_sdk import AuthenticatedClient

   client = AuthenticatedClient(
       base_url="https://api.us-east-1.lucid.ai",
       token="your-api-key",
   )
   ```
### Security Trade-offs
| Property | TEE Model | External Model |
|---|---|---|
| Computation confidentiality | Hardware-encrypted | Provider sees all data |
| Hardware attestation | Yes | No |
| Auditor enforcement | Yes | Yes |
| Cedar policy evaluation | Yes | Yes |
| Signed audit trail | Hardware-rooted | Lucid-signed |
| Passport status | VERIFIED | PARTIALLY VERIFIED |
### Supported Providers

- **Anthropic**: Claude Opus 4, Claude Sonnet 4, Claude Haiku 3.5
- **OpenAI**: GPT-4o, GPT-4o mini, o3, o4-mini
## The Shared Contract: LucidEnvironment

Both modes use the `LucidEnvironment` CRD format as their configuration contract:

```yaml
apiVersion: lucid.io/v1alpha1
kind: LucidEnvironment
metadata:
  name: my-platform
spec:
  infrastructure:
    provider: gcp
    region: us-central1
    # ...
  agents:
    - name: my-agent
      model:
        id: meta-llama/Llama-3.3-70B
      # ...
  services:
    observability:
      enabled: true
```
This format captures everything needed for a complete deployment:

- **Infrastructure**: Cloud provider, region, cluster configuration, node pools
- **Agents**: LLM deployments with models, GPUs, and audit chains
- **Services**: Observability, gateway, vector database
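Because the contract is plain YAML, a CI pipeline can sanity-check an environment file before it ever reaches `lucid apply`. A minimal sketch, with field names taken from the example above; the required-sections list is an illustrative assumption, not the operator's actual validation logic:

```python
# Sketch: check that a parsed LucidEnvironment has the three top-level
# spec sections shown in the example. (Assumed requirements, not the
# operator's real schema.)
REQUIRED_SPEC_SECTIONS = ("infrastructure", "agents", "services")

def missing_sections(env: dict) -> list:
    spec = env.get("spec", {})
    return [s for s in REQUIRED_SPEC_SECTIONS if s not in spec]

# Dict mirroring the YAML example above.
env = {
    "apiVersion": "lucid.io/v1alpha1",
    "kind": "LucidEnvironment",
    "metadata": {"name": "my-platform"},
    "spec": {
        "infrastructure": {"provider": "gcp", "region": "us-central1"},
        "agents": [{"name": "my-agent",
                    "model": {"id": "meta-llama/Llama-3.3-70B"}}],
        "services": {"observability": {"enabled": True}},
    },
}
print(missing_sections(env))  # → []
```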
## Self-Hosted Deployment (CLI)
Use the CLI when you want full control over your infrastructure.
### Prerequisites

- Kubernetes cluster (GKE, EKS, AKS, or local)
- `kubectl` configured
- Lucid CLI installed (available to alpha participants)
### Workflow

```bash
# 1. Authenticate
lucid login

# 2. Write your environment configuration
cat > my-env.yaml << 'EOF'
apiVersion: lucid.io/v1alpha1
kind: LucidEnvironment
metadata:
  name: prod-agents
spec:
  infrastructure:
    provider: gcp
    region: us-central1
    projectId: my-project
    cluster:
      name: lucid-cluster
  agents:
    - name: assistant
      model:
        id: meta-llama/Llama-3.3-70B
      gpu:
        type: H100
        memory: 80GB
EOF

# 3. Preview what will be deployed
lucid diff -f my-env.yaml

# 4. Deploy
lucid apply -f my-env.yaml
```
### What `apply` Does

The `lucid apply` command orchestrates the full deployment:

1. **Infrastructure Provisioning** (if `provider != local`)
   - Creates cloud resources (GKE/EKS/AKS cluster)
   - Configures networking, node pools, GPUs
2. **Cluster Setup** (if operator not installed)
   - Installs Lucid operator
   - Configures RBAC, webhooks
3. **Agent Deployment**
   - Creates agents via Verifier API
   - Configures audit chains
### Flags

| Flag | Description |
|---|---|
| `--skip-infra` | Skip infrastructure provisioning (use existing cluster) |
| `-y, --yes` | Skip confirmation prompts |
| `--managed` | Use Lucid managed deployment |
### Local Development

For local development, you need a local Kubernetes cluster. You can use kind, minikube, or Docker Desktop with Kubernetes enabled.

```bash
# Option 1: Create a cluster with kind
kind create cluster --name lucid-dev

# Option 2: Create a cluster with minikube
minikube start --driver=docker

# Option 3: Enable Kubernetes in Docker Desktop settings
# (No CLI command needed - use the Docker Desktop UI)
```

Once your cluster is running, deploy your environment:

```bash
# Verify cluster is accessible
kubectl cluster-info

# Deploy environment
lucid apply -f my-env.yaml -y

# Check status
lucid status

# View logs
lucid logs my-agent

# Teardown
lucid teardown
```

To clean up the local cluster when done:

```bash
# For kind
kind delete cluster --name lucid-dev

# For minikube
minikube delete
```
The operator URL is auto-detected from the `LUCID_OPERATOR_URL` environment variable or defaults to `localhost:8443`.
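If your own tooling needs to reach the same operator endpoint, you can mirror that lookup. A minimal sketch of the documented fallback (not the CLI's actual implementation):

```python
import os

def operator_url() -> str:
    # Use LUCID_OPERATOR_URL when set, otherwise the documented default.
    return os.environ.get("LUCID_OPERATOR_URL", "localhost:8443")

print(operator_url())  # "localhost:8443" unless the env var is set
```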
## Lucid-Managed Deployment (Observer GUI)
Use the Observer GUI when you want Lucid to handle infrastructure.
### Workflow

1. Open the Deployment Wizard in Observer
2. Configure your deployment:
   - Select model
   - Choose GPU and region
   - Configure audit chain
3. Deploy or Export YAML
### Export for Version Control

The wizard can export your configuration as `LucidEnvironment` YAML via the Export YAML button.
This exported YAML can be:

- Committed to version control
- Shared with team members
- Applied via CLI to a different cluster
- Modified and re-imported
## Migration Between Modes

### From Lucid-Managed to Self-Hosted

1. Export your environment from the Observer GUI
2. Update `spec.infrastructure.provider` to your target cloud
3. Apply via CLI: `lucid apply -f exported-env.yaml`

### From Self-Hosted to Lucid-Managed

1. Export your environment: `lucid export my-env -o my-env.yaml`
2. Import in Observer GUI (coming soon)
## Comparison
| Aspect | Serverless (Lucid-Managed) | Self-Hosted |
|---|---|---|
| Setup time | Instant | Minutes-hours |
| Infrastructure | Lucid shared pools | You manage |
| Configuration | CLI flags or GUI wizard | Full YAML |
| TEE Security | ✅ Hardware attestation | ✅ Hardware attestation |
| Data residency | US/EU/APAC regions | Your control |
| Customization | Auditor profiles from catalog | Full access, custom auditors |
| Best for | Quick start, PoC, most teams | Enterprise, specific compliance |
### When to Use Serverless (Lucid-Managed)
- Getting started with Lucid
- Proof of concept deployments
- Teams without K8s expertise
- Cost-effective for low-to-medium traffic
- Prefer GUI over CLI
### When to Use Self-Hosted
- Full infrastructure control needed
- Specific compliance requirements
- High-volume production workloads
- Custom auditor implementations
- Air-gapped or private cloud environments
## Deployment Type

Every Lucid deployment uses the `model` deployment type, which provisions a model with auditors and exposes an OpenAI-compatible API endpoint. You bring your own frontend or connect via workflows.

```yaml
spec:
  deployment_type: model
  agents:
    - name: backend-llm
      model:
        id: meta-llama/Llama-3.3-70B
```
## Best Practices

- **Start with serverless**: Use `lucid apply --model Y --profile Z` to prototype quickly
- **Version control environments**: Store `LucidEnvironment` YAML in git for self-hosted
- **Use `diff` before `apply`**: Review changes before deploying
- **Verify attestation**: Use `lucid verify` to confirm TEE security client-side
- **Separate environments**: Use different configs for dev/staging/prod