
Deployment Modes

This guide helps you choose the right deployment mode and understand the trade-offs between serverless and self-hosted options.

Ready to deploy?

Once you've chosen a mode, see the Deployment Guide for detailed step-by-step instructions.

Connect your development tools

After deploying, see the Integration Guide to connect tools like OpenCode and OpenClaw to your agent.


Lucid supports two deployment modes:

| Mode | Interface | Infrastructure | Best For |
| --- | --- | --- | --- |
| Serverless (Lucid-Managed) | CLI or Observer GUI | Lucid shared pools | Quick start, PoC, instant deployment |
| Self-Hosted | CLI | Your K8s cluster | Enterprise, full control |

Both modes provide identical TEE security guarantees — the only difference is who manages the infrastructure.

Serverless Deployment (Lucid-Managed)

The fastest way to deploy AI workloads with TEE security guarantees. No infrastructure provisioning needed — deploy in seconds.

How It Works

flowchart LR
    subgraph You["Your Side"]
        CLI["lucid apply<br/>--app --model"]
        App["Your Application"]
    end

    subgraph Lucid["Lucid Platform"]
        Config["Environment<br/>Config"]
        subgraph Pools["Shared Pools (TEE)"]
            M["🧠 Models"]
            A["🛡️ Auditors"]
            P["📱 Apps"]
        end
    end

    CLI --> Config
    Config --> Pools
    App -->|"Direct TLS"| Pools

Key properties:

  • Instant deployment: no infrastructure provisioning needed
  • Same TEE security: hardware attestation identical to self-hosted
  • Zero-trust: you can verify attestation directly against Intel/AMD root of trust
  • Data isolation: TEE memory isolation ensures Lucid never sees your plaintext data
  • Automatic GPU optimization: significant inference cost savings via batch tuning and quantization

How Serverless Routing Works

When you deploy to serverless, the system creates an environment configuration, assigns it to available TEE resources matching your requirements (model, region, data residency), and provides routing endpoints. Your application connects directly to TEE endpoints over TLS — Lucid infrastructure is never in the data path.

You can verify TEE attestation client-side against the hardware vendor's root of trust using lucid verify.

CLI Usage

# Deploy with app, model, and auditor profile
lucid apply --app open-webui --model llama-3.1-8b --profile chat

# Deploy with data residency requirement
lucid apply --model qwen-72b --profile coding --region eu

# Browse available resources
lucid catalog models
lucid catalog auditors
lucid catalog apps

# Verify TEE attestation (client-side, against hardware root of trust)
lucid verify endpoint https://env-abc123.serverless.lucid.ai

Observer GUI Usage

  1. Open Observer and select "Deploy"
  2. Choose "Serverless" (default)
  3. Select your app, model, and auditor profile from the catalog
  4. Click Deploy — your environment is ready instantly

The Shared Contract: LucidEnvironment

Both modes use the LucidEnvironment CRD format as their configuration contract:

apiVersion: lucid.io/v1alpha1
kind: LucidEnvironment
metadata:
  name: my-platform
spec:
  infrastructure:
    provider: gcp
    region: us-central1
    # ...
  agents:
    - name: my-agent
      model:
        id: meta-llama/Llama-3.3-70B
      # ...
  apps:
    - appId: openhands
      teeMode: adjacent
  services:
    observability:
      enabled: true

This format captures everything needed for a complete deployment:

  • Infrastructure: cloud provider, region, cluster configuration, node pools
  • Agents: LLM deployments with models, GPUs, and audit chains
  • Apps: catalog applications to deploy alongside agents
  • Services: observability, gateway, vector database

Self-Hosted Deployment (CLI)

Use the CLI when you want full control over your infrastructure.

Prerequisites

  • Kubernetes cluster (GKE, EKS, AKS, or local)
  • kubectl configured
  • Lucid CLI installed (available to alpha participants)

Workflow

# 1. Authenticate
lucid login

# 2. Write your environment configuration
cat > my-env.yaml << 'EOF'
apiVersion: lucid.io/v1alpha1
kind: LucidEnvironment
metadata:
  name: prod-agents
spec:
  infrastructure:
    provider: gcp
    region: us-central1
    projectId: my-project
    cluster:
      name: lucid-cluster
  agents:
    - name: assistant
      model:
        id: meta-llama/Llama-3.3-70B
      gpu:
        type: H100
        memory: 80GB
EOF

# 3. Preview what will be deployed
lucid diff -f my-env.yaml

# 4. Deploy
lucid apply -f my-env.yaml

What apply Does

The lucid apply command orchestrates the full deployment:

  1. Infrastructure Provisioning (if provider != local)
     • Creates cloud resources (GKE/EKS/AKS cluster)
     • Configures networking, node pools, GPUs
  2. Cluster Setup (if operator not installed)
     • Installs the Lucid operator
     • Configures RBAC, webhooks
  3. Agent Deployment
     • Creates agents via the Verifier API
     • Configures audit chains
  4. App Deployment
     • Deploys catalog apps (coming soon)

Flags

| Flag | Description |
| --- | --- |
| --skip-infra | Skip infrastructure provisioning (use an existing cluster) |
| -y, --yes | Skip confirmation prompts |
| --managed | Use Lucid-managed deployment |
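
These flags combine naturally when redeploying into a cluster that already exists. A sketch using the flags from the table above (the file name is illustrative):

```shell
# Re-apply an environment to an existing cluster, skipping
# infrastructure provisioning and confirmation prompts
lucid apply -f my-env.yaml --skip-infra -y
```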

Local Development

For local development, you need a local Kubernetes cluster. You can use kind, minikube, or Docker Desktop with Kubernetes enabled.

# Option 1: Create a cluster with kind
kind create cluster --name lucid-dev

# Option 2: Create a cluster with minikube
minikube start --driver=docker

# Option 3: Enable Kubernetes in Docker Desktop settings
# (No CLI command needed - use the Docker Desktop UI)

Once your cluster is running, deploy your environment:

# Verify cluster is accessible
kubectl cluster-info

# Deploy environment
lucid apply -f my-env.yaml -y

# Check status
lucid status

# View logs
lucid logs my-agent

# Teardown
lucid teardown

To clean up the local cluster when done:

# For kind
kind delete cluster --name lucid-dev

# For minikube
minikube delete

The operator URL is auto-detected from the LUCID_OPERATOR_URL environment variable, defaulting to localhost:8443.
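
To point the CLI at a non-default operator endpoint, set the environment variable before running commands. A minimal sketch (the endpoint value here is a made-up example, not a real host):

```shell
# Override the default operator endpoint (localhost:8443)
export LUCID_OPERATOR_URL="operator.lucid.internal:8443"
echo "$LUCID_OPERATOR_URL"
```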

Lucid-Managed Deployment (Observer GUI)

Use the Observer GUI when you want Lucid to handle infrastructure.

Workflow

  1. Open the Deployment Wizard in Observer
  2. Configure your deployment:
     • Select model
     • Choose GPU and region
     • Configure audit chain
     • Select apps
  3. Deploy or Export YAML

Export for Version Control

The wizard can export your configuration as LucidEnvironment YAML:

// In the wizard
wizardStore.downloadAsYaml();  // Downloads my-environment.yaml

This exported YAML can be:

  • Committed to version control
  • Shared with team members
  • Applied via CLI to a different cluster
  • Modified and re-imported
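
For the version-control path, a minimal sketch (repo, file name, and file contents are illustrative; assumes git is installed):

```shell
# Commit an exported environment file to a fresh git repo
mkdir -p lucid-envs && cd lucid-envs
git init -q .
cat > my-environment.yaml <<'EOF'
apiVersion: lucid.io/v1alpha1
kind: LucidEnvironment
metadata:
  name: prod-agents
EOF
git add my-environment.yaml
git -c user.name="demo" -c user.email="demo@example.com" \
    commit -q -m "Add Lucid environment config"
git log --oneline
```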

Migration Between Modes

From Lucid-Managed to Self-Hosted

  1. Export your environment from Observer GUI
  2. Update spec.infrastructure.provider to your target cloud
  3. Apply via CLI:
    lucid apply -f exported-env.yaml
    

From Self-Hosted to Lucid-Managed

  1. Export your environment:
    lucid export my-env -o my-env.yaml
    
  2. Import in Observer GUI (coming soon)

Comparison

| Aspect | Serverless (Lucid-Managed) | Self-Hosted |
| --- | --- | --- |
| Setup time | Instant | Minutes to hours |
| Infrastructure | Lucid shared pools | You manage |
| Configuration | CLI flags or GUI wizard | Full YAML |
| TEE security | ✅ Hardware attestation | ✅ Hardware attestation |
| Data residency | US/EU/APAC regions | Your control |
| Customization | Auditor profiles from catalog | Full access, custom auditors |
| Best for | Quick start, PoC, most teams | Enterprise, specific compliance |

When to Use Serverless (Lucid-Managed)

  • Getting started with Lucid
  • Proof of concept deployments
  • Teams without K8s expertise
  • Cost-effective for low-to-medium traffic
  • Prefer GUI over CLI

When to Use Self-Hosted

  • Full infrastructure control needed
  • Specific compliance requirements
  • High-volume production workloads
  • Custom auditor implementations
  • Air-gapped or private cloud environments

Deployment Types

Every Lucid deployment has a type that determines which components are provisioned. The type is set via the deployment_type field in the environment spec.

full -- Traditional Deployment (Default)

The standard deployment: a model, a user-facing app, and an auditor chain -- all running inside a TEE.

spec:
  deployment_type: full
  agents:
    - name: assistant
      model:
        id: meta-llama/Llama-3.3-70B
  apps:
    - appId: open-webui

Use when: You need a complete, self-contained AI application with a UI.

model -- Headless Model API

A model with auditors but no user-facing frontend. Exposes an OpenAI-compatible API endpoint that other deployments or workflows can call.

spec:
  deployment_type: model
  agents:
    - name: backend-llm
      model:
        id: meta-llama/Llama-3.3-70B

Use when: The model serves as a backend for workflows, or you bring your own frontend. This is the most common node type in workflows.
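
Because the endpoint is OpenAI-compatible, any standard OpenAI-style client can call it. A hedged sketch (the endpoint URL is a placeholder modeled on the example earlier in this guide; add whatever auth header your deployment requires):

```shell
# Query a headless model deployment's OpenAI-compatible chat API (placeholder URL)
curl -s https://env-abc123.serverless.lucid.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-3.3-70B",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```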

app -- Frontend Only

A frontend application with no bundled LLM backend. The app receives its LLM backend from a workflow orchestrator that routes traffic to one or more model deployments.

spec:
  deployment_type: app
  apps:
    - appId: open-webui

Use when: You want to decouple the UI from the model, or a single frontend needs to route between multiple model backends via a workflow.

bridge -- Protocol Adapter

A lightweight adapter that translates between an external protocol and the Lucid-internal OpenAI-compatible API. Bridges run with auditors but no model -- they proxy requests to a downstream model deployment.

spec:
  deployment_type: bridge
  bridge:
    type: chatwoot
    webhook_url: https://chatwoot.example.com/webhooks

Use when: You need to integrate an external system (e.g., Chatwoot for customer support) that speaks a different protocol than OpenAI-compatible chat completions.

Composing deployment types with workflows

Deployment types become powerful when combined via Workflows. A typical pattern: an app deployment provides the UI, a workflow orchestrator routes user intents to specialized model deployments, and a bridge deployment connects external channels like Chatwoot.
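
The composition above is easiest to picture as configuration. The sketch below is hypothetical: this guide does not define a workflow schema, so the field names (nodes, routes) are invented purely for illustration.

```yaml
# Hypothetical workflow sketch: one UI app, two headless models, one bridge.
# Field names (nodes, routes) are illustrative, not a documented schema.
nodes:
  - name: chat-ui         # deployment_type: app
  - name: general-llm     # deployment_type: model
  - name: coding-llm      # deployment_type: model
  - name: support-bridge  # deployment_type: bridge (e.g., Chatwoot)
routes:
  - from: chat-ui
    to: [general-llm, coding-llm]   # orchestrator routes by user intent
  - from: support-bridge
    to: general-llm
```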


Best Practices

  1. Start with serverless: Use lucid apply --app X --model Y to prototype quickly
  2. Version control environments: Store LucidEnvironment YAML in git for self-hosted
  3. Use diff before apply: Review changes before deploying
  4. Verify attestation: Use lucid verify to confirm TEE security client-side
  5. Separate environments: Use different configs for dev/staging/prod
  6. Use model type for workflow backends: Keep model deployments headless when they serve as workflow nodes
  7. Use bridge for external integrations: Avoid custom protocol handling in your model or app code