Deployment Guide

This guide covers the step-by-step workflow for deploying AI agents and auditors using the Lucid platform.

Alpha Access Required

Lucid is in private alpha. Request access to get started.

Not sure which deployment mode to use?

See Deployment Modes for a comparison of serverless vs self-hosted options and guidance on choosing the right approach for your use case.

Connect your development tools

After deploying, see the Integration Guide to connect tools like OpenCode and Aider to your agent.

Lucid supports two deployment modes:

Mode | Command | Best For
Serverless (Lucid-Managed) | lucid apply --model Y --profile Z or Observer GUI | Quick start, instant deployment
Self-Hosted | lucid apply -f env.yaml -f workspace.yaml | Full infrastructure control

Both modes provide identical TEE security guarantees.


Serverless Deployment

The fastest way to get started: deploy instantly to Lucid's shared infrastructure with the same TEE security guarantees as self-hosted.

Step 1: Authenticate

lucid login -e [email protected] -p mypassword
Logged in as [email protected]

Step 2: Browse Available Resources

lucid catalog models
ID DESCRIPTION CONTEXT TEE
meta-llama/Llama-3.1-8B Llama 3.1 8B Instruct 128K ✓
meta-llama/Llama-3.1-70B Llama 3.1 70B Instruct 128K ✓
Qwen/Qwen2.5-72B-Instruct Qwen 2.5 72B 128K ✓
microsoft/Phi-3.5-mini Phi 3.5 Mini 128K ✓
lucid catalog auditors
PROFILE AUDITORS
coding model-context, llm-judge
chat model-context, llm-judge
default model-context

Step 3: Deploy Instantly

lucid apply --model llama-3.1-8b --profile chat
[*] Creating serverless environment...
[+] Environment created: env-abc123def456

Connection URL: https://env-abc123def456.serverless.lucid.ai
Model: meta-llama/Llama-3.1-8B-Instruct
Auditors: model-context, llm-judge
Region: us-east-1

[+] Environment ready! No infrastructure provisioning needed.

Step 4: Verify TEE Attestation

lucid verify environment env-abc123def456
[*] Fetching routing info for environment env-abc123def456...
[+] Model: https://model-xyz.serverless.lucid.ai (us-east-1) - amd_sev_snp
[+] Auditor: https://auditor-abc.serverless.lucid.ai (us-east-1) - amd_sev_snp

[*] 2/2 endpoints have attestation reports
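At its core, the attestation check above compares each endpoint's reported TEE measurement against a trusted expected value. The sketch below illustrates that comparison only; the report field names (`role`, `measurement`) and expected values are hypothetical, not the actual Lucid report schema.

```python
import hashlib

# Illustrative trusted measurements, keyed by endpoint role.
# In a real flow these come from signed reference values, not hardcoded bytes.
EXPECTED_MEASUREMENTS = {
    "model": hashlib.sha256(b"model-image-v1").hexdigest(),
    "auditor": hashlib.sha256(b"auditor-image-v1").hexdigest(),
}

def verify_endpoint(report: dict) -> bool:
    """Return True if the endpoint's reported measurement matches the trusted value."""
    expected = EXPECTED_MEASUREMENTS.get(report.get("role"))
    return expected is not None and report.get("measurement") == expected

good = {"role": "model", "measurement": hashlib.sha256(b"model-image-v1").hexdigest()}
bad = {"role": "model", "measurement": "0" * 64}

print(verify_endpoint(good))  # True
print(verify_endpoint(bad))   # False
```

A real verifier also checks the hardware signature chain (e.g., AMD SEV-SNP's VCEK chain) before trusting the measurement at all.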

Serverless Options

Flag | Description
--model <id> | Model from catalog (e.g., llama-3.1-8b)
--profile <name> | Auditor profile (coding, chat, workflow, customer, default)
--region <code> | Data residency (us, eu, apac, any)

Self-Hosted Deployment

For full control over your infrastructure, use declarative YAML configuration and the lucid apply command.

Step 1: Authenticate

lucid login -e [email protected] -p mypassword
Logged in as [email protected]

Or use an API key for automation:

lucid login --api-key <your-api-key>
Authenticated with API key

Step 2: Create Environment YAML

Create a my-env.yaml file defining your infrastructure:

apiVersion: lucid.io/v1alpha1
kind: LucidEnvironment
metadata:
  name: prod-agents
spec:
  infrastructure:
    provider: gcp
    region: us-central1
    projectId: my-project
  workspaces:
    - my-workspace

Create a workspace.yaml defining your agents:

apiVersion: lucid.io/v1alpha1
kind: LucidWorkspace
metadata:
  name: my-workspace
spec:
  defaults:
    auditorChain:
      preRequest:
        - auditorId: lucid-llm-judge-auditor
  agents:
    - name: my-agent
      model:
        id: meta-llama/Llama-3.3-70B
      gpu:
        type: H100
        memory: 80GB

Step 3: Preview and Deploy

lucid diff -f my-env.yaml -f workspace.yaml
Environment: prod-agents

Infrastructure:
Provider: gcp
Region: us-central1

Agents (1):
- my-agent [enabled]
Model: meta-llama/Llama-3.3-70B
GPU: H100 (80GB)
Auditors: 1

Run 'lucid apply -f <file>' to deploy.
lucid apply -f my-env.yaml -f workspace.yaml
Creating agent: my-agent...
Created: agent-abc123

[+] Environment deployed successfully!

Step 4: Monitor and Manage

lucid status
Context: prod

Agents (1):
NAME STATUS MODEL GPU
my-agent running meta-llama/Llama-3.3-70B H100
lucid logs my-agent
[2024-01-15 10:30:00] Agent started
[2024-01-15 10:30:01] Auditors initialized
lucid stop my-agent
Agent 'my-agent' stopped.
lucid start my-agent
Agent 'my-agent' started.

Step 5: View AI Passports

lucid passport list
ID AGENT TIMESTAMP
pass-001 agent-abc123 2024-01-15T10:30:00Z
pass-002 agent-abc123 2024-01-15T10:31:00Z
lucid passport show pass-001
Passport ID: pass-001
Hardware Attested: true
TEE Type: AMD SEV-SNP
Auditors: injection, toxicity
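Consumers of a passport typically gate on a few fields before trusting an agent's output. This sketch checks the fields shown in the `lucid passport show` output above; the dictionary key names and the allowed-TEE set are assumptions for illustration, not the real passport schema.

```python
# Hypothetical acceptance policy for a fetched passport record.
ALLOWED_TEE_TYPES = {"AMD SEV-SNP", "Intel SGX", "AWS Nitro"}

def passport_is_trustworthy(passport: dict) -> bool:
    """Accept only hardware-attested passports from a known TEE with at least one auditor."""
    return (
        passport.get("hardware_attested") is True
        and passport.get("tee_type") in ALLOWED_TEE_TYPES
        and len(passport.get("auditors", [])) > 0
    )

p = {
    "id": "pass-001",
    "hardware_attested": True,
    "tee_type": "AMD SEV-SNP",
    "auditors": ["injection", "toxicity"],
}
print(passport_is_trustworthy(p))  # True
```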

Step 6: Configure Passport Display

Choose how end users see the AI Passport. Add passport configuration to your agent definition:

agents:
  - name: my-agent
    passport:
      display:
        mode: banner
        bannerPosition: top
      content:
        includedEvidence:
          - tee_attestation
          - auditor_claims
        validityDays: 30
Mode | Description
banner | Persistent bar at top/bottom of the page
floating | Small badge in corner, expands on click
page_only | No in-app UI, users visit /.lucid/passport
browser_extension | Users install Lucid Passport Verifier extension

See AI Passports for full configuration options.


Local Development

For local development, create a local Kind cluster:

kind create cluster --name lucid-dev
Creating cluster "lucid-dev" ...
✓ Ensuring node image
✓ Preparing nodes
✓ Writing configuration
✓ Starting control-plane
✓ Installing CNI
✓ Installing StorageClass
Set kubectl context to "kind-lucid-dev"

Then deploy a test environment. Create a file called dev-env.yaml:

apiVersion: lucid.io/v1alpha1
kind: LucidEnvironment
metadata:
  name: local-test
spec:
  infrastructure:
    provider: local
  agents:
    - name: mini-chat
      model:
        id: mock
      gpu:
        type: CPU

Apply the configuration:

lucid apply -f dev-env.yaml -y
[+] Environment deployed successfully!
lucid status
Context: dev
Cluster: lucid-local-k8s
Status: Running

Services:
[OK] verifier-service (1/1)
[OK] observer-ui (1/1)

Agents (1):
NAME STATUS MODEL GPU
mini-chat running mock CPU

Teardown

lucid teardown
Delete kind cluster 'lucid-local-k8s'? This cannot be undone. [y/N]: y
Deleting cluster 'lucid-local-k8s'...
Cluster deleted.
CLI context reset to production.

Building & Publishing Auditors

Before deploying custom auditors, you must build, verify, and publish them to the Lucid registry.

Build & Verify Auditors

Ensure your Auditor container is compliant with the Lucid Standard.

docker build -t my-auditor:v1 .
Successfully built my-auditor:v1
lucid auditor verify my-auditor:v1
[+] Basic labels found.
[+] Compliance probe successful!
[*] Verification complete. Auditor is compliant.

Publish to Registry

To deploy auditors in production, every image must be notarized. This registers the container's cryptographic digest with the Lucid Verifier.
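The value of registering a digest is that it pins the image by content: any byte-level change produces a different digest, so the Verifier can refuse tampered images at admission time. The sketch below illustrates that idea only; it is not the actual Lucid registry protocol.

```python
import hashlib

def image_digest(image_bytes: bytes) -> str:
    """Content-address an image blob, OCI-style (sha256 over the bytes)."""
    return "sha256:" + hashlib.sha256(image_bytes).hexdigest()

# Digests recorded at `lucid auditor publish` time (illustrative store).
notarized = {image_digest(b"my-auditor:v1 layer bytes")}

def admit(image_bytes: bytes) -> bool:
    """Admit only images whose digest was registered at publish time."""
    return image_digest(image_bytes) in notarized

print(admit(b"my-auditor:v1 layer bytes"))  # True
print(admit(b"tampered layer bytes"))       # False
```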

lucid auditor publish my-auditor:v1
Pushing image to registry...
Registering digest with Verifier...
[+] Auditor published and notarized.

Define the Safety Policy

Define your safety guardrails in the auditorChain section of your agent configuration. Use auditorSettings to configure auditor-specific parameters:

agents:
  - name: my-agent
    auditorChain:
      preRequest:
        - auditorId: lucid-llm-judge-auditor
      postResponse:
        - auditorId: lucid-llm-judge-auditor

See the Policy as Code guide for full schema details.


Using Official Auditor Images

Lucid provides official auditor images to alpha participants:

docker pull us-central1-docker.pkg.dev/lucid-public/lucid-auditors/lucid-llm-judge-auditor:latest

Available auditors:

Auditor | Port | Description
lucid-llm-judge-auditor | 8098 | LLM-driven guardrails -- jailbreak, fact-checking, PII, hallucination

Command Reference

Command | Description
lucid apply -f env.yaml -f workspace.yaml | Deploy environment from YAML
lucid diff -f env.yaml | Preview changes before applying
lucid status | List all agents
lucid status <agent-name> | Show specific agent status
lucid logs <agent-name> | View agent logs
lucid logs <agent-name> -f | Stream agent logs
lucid start <agent-name> | Start a stopped agent
lucid stop <agent-name> | Stop a running agent
lucid teardown | Delete local Kind cluster
lucid export <name> -o env.yaml | Export cluster state to YAML

What Happens Under the Hood

When you deploy an agent, the Lucid platform:

  1. Provisions TEE Environment: Creates a secure execution environment with hardware-based isolation
  2. Injects Sidecars: Adds all auditors defined in your chain to the workload
  3. Configures Networking: Routes traffic through the auditor sequence
  4. Enforces Attestation: Ensures all components are cryptographically verified
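Step 3 above is essentially a request pipeline: preRequest auditors run before inference, postResponse auditors after. The minimal sketch below shows that routing order; auditor names and behavior are illustrative, not Lucid's implementation.

```python
# Build a handler that routes traffic through the auditor sequence.
def make_pipeline(pre_auditors, model, post_auditors):
    def handle(request: str) -> str:
        for audit in pre_auditors:   # preRequest chain runs on the inbound prompt
            audit(request)
        response = model(request)    # model inference (inside the TEE)
        for audit in post_auditors:  # postResponse chain runs on the model output
            audit(response)
        return response
    return handle

calls = []
judge = lambda text: calls.append(("llm-judge", text))  # stand-in auditor
model = lambda req: f"echo: {req}"                      # stand-in model

pipeline = make_pipeline([judge], model, [judge])
print(pipeline("hi"))  # echo: hi
print(calls)           # [('llm-judge', 'hi'), ('llm-judge', 'echo: hi')]
```

In the real deployment, an auditor can also reject the request or response, short-circuiting the chain.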

Verifiable Agent Pod (VAP) Deployment

VAP provides a streamlined path for deploying AI agents as TEE-attested containers with built-in governance. If your agent image is based on lucid-agent-base, it automatically gets TEE execution, mTLS, Cedar enforcement, auditor chain, passport, owner binding, and credential injection.

VAP Deployment Flow

  1. Identity Creation: Agent identity (email, handle) is created during the Quick Deploy wizard. The deploying user becomes the first Owner.
  2. Credential Setup: Owner connects services (Google, Slack, GitHub) via OAuth on the agent's Credentials page. Credentials are managed through agent-blind routing rules in the agent's credentialRouting configuration.
  3. Container Bootstrap: The Operator provisions a TEE pod and injects CoCo AA + mTLS sidecars (same as standard auditor deployment).
  4. Startup Handshake: The agent entrypoint performs a registration handshake with the Verifier, presenting its TEE quote. The Verifier validates the quote, binds the agent identity, and issues a bootstrap token.
  5. Steady State: The Verifier injects credentials on matching requests. Cedar policies (derived from auditor settings cascade) control what the agent can access. Every action produces signed evidence for the passport.
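The handshake in step 4 reduces to: present a quote, have it checked against a known-good measurement, and receive a bootstrap token only on success. This toy sketch shows that shape; real quotes carry signed hardware evidence and the function names here are hypothetical.

```python
import hashlib
import secrets

# Illustrative known-good measurement for the attested agent image.
KNOWN_GOOD = hashlib.sha256(b"lucid-agent-base:latest").hexdigest()

def verifier_register(quote: str, agent_id: str):
    """Validate the TEE quote; issue a bootstrap token bound to the agent identity."""
    if quote != KNOWN_GOOD:
        return None  # reject unattested workloads
    return f"bootstrap-{agent_id}-{secrets.token_hex(8)}"

token = verifier_register(KNOWN_GOOD, "my-agent")
print(token is not None)                           # True
print(verifier_register("bad-quote", "my-agent"))  # None
```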

Container Images

Image | Purpose
Dockerfile.agent-base | Production default -- hardened image with no shell, read-only filesystem, CoCo labels, and non-root execution. Use this for all deployments.
Dockerfile.agent-dev | Development image with shell access and writable rootfs. For local dev and debugging only.

Example VAP Dockerfile

FROM lucid-agent-base:latest
COPY my_agent/ /app/
# Use lucid-auditor-sdk for tools -- the Operator handles the rest:
#   - CoCo AA + mTLS sidecar injection
#   - Cedar policy loading and auditor chain attachment
#   - Credential injection via the Verifier
#   - Evidence collection and passport generation

Cloud Provider Requirements

To use a real Hardware Root of Trust, your workloads run on TEE-capable infrastructure managed by Lucid:

Provider | Hardware
GCP | N2DL nodes (AMD SEV-SNP) with Confidential Computing
Azure | DCsv3 or ECsv3 nodes (Intel SGX)
AWS | Nitro-based instances with Enclaves

Select your preferred region and hardware in your environment YAML:

spec:
  infrastructure:
    provider: gcp
    region: us-central1
  agents:
    - name: my-agent
      gpu:
        type: H100
        memory: 80GB
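For comparison, an Azure-targeted variant of the same environment might look like the fragment below. The `provider: azure` value and `eastus` region are assumptions mirroring the GCP example above; consult the schema reference for the accepted values.

```yaml
spec:
  infrastructure:
    provider: azure        # assumed value, by analogy with the gcp example
    region: eastus         # assumed Azure region code
  agents:
    - name: my-agent
      gpu:
        type: H100
        memory: 80GB
```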