# Cluster Setup
> **Alpha Access Required**
>
> Lucid is in private alpha. Request access to get started.
This guide covers how to connect your AI workloads to the Lucid platform for TEE-based security.
## Using Lucid-Managed Infrastructure (Recommended)
The simplest way to use Lucid is through the hosted platform, where TEE infrastructure is managed for you.
### Step 1: Authenticate
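The alpha docs don't show the exact authentication command; assuming the CLI follows the common `<tool> auth login` pattern, it would look something like:

```shell
# Hypothetical subcommand -- the auth command name is an assumption, not
# confirmed by the docs. Expect a browser-based login with alpha credentials.
lucid auth login
```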
### Step 2: Deploy from YAML
Create a `my-env.yaml` file:
```yaml
apiVersion: lucid.io/v1alpha1
kind: LucidEnvironment
metadata:
  name: my-platform
spec:
  infrastructure:
    provider: gcp
    region: us-central1
  agents:
    - name: my-agent
      model:
        id: meta-llama/Llama-3.3-70B
      gpu:
        type: H100
        memory: 80GB
```
Deploy it:
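A sketch of the deploy command, combining the `lucid apply -f <file>` form shown later in this guide with the documented `--managed` flag for Lucid-managed deployment:

```shell
# Deploy the environment to Lucid's hosted platform (no self-managed cluster needed).
lucid apply -f my-env.yaml --managed
```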
### Step 3: Verify Status
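The docs don't name the status command; assuming a conventional `status` subcommand (hypothetical), checking the deployed agent might look like:

```shell
# Hypothetical subcommand: report the state of deployed agents.
lucid status
```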
```
ID:     agent-abc123
Status: running
Model:  meta-llama/Llama-3.3-70B
GPU:    H100
```
That's it! Your agent is running on TEE-capable hardware managed by Lucid.
## Self-Hosted Deployment (Advanced)
For users who want full control, deploy to your own Kubernetes cluster using `lucid apply`.
### Prerequisites
- Kubernetes cluster (GKE, EKS, AKS, or local Kind/minikube)
- `kubectl` configured and connected to your cluster
- TEE-capable nodes (for production) or mock TEE mode (for development)
| Provider | TEE Requirement |
|---|---|
| GCP | GKE with Confidential GKE Nodes (AMD SEV-SNP) |
| Azure | AKS with DCsv3 or ECsv3 nodes (Intel SGX) |
| AWS | EKS with Nitro Enclaves enabled |
| Local | Kind or minikube with mock TEE mode (see Local Development) |
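As an illustration for the GCP row, a minimal cluster with Confidential GKE Nodes can be created with `gcloud`. The cluster name and zone here are placeholders, and Confidential Nodes require an AMD SEV-capable machine series such as N2D:

```shell
# Sketch: GKE cluster with Confidential Nodes enabled (AMD SEV hardware).
gcloud container clusters create lucid-cluster \
  --zone us-central1-a \
  --machine-type n2d-standard-4 \
  --enable-confidential-nodes
```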
### Step 1: Create Environment File
Create a `my-env.yaml` file:
```yaml
apiVersion: lucid.io/v1alpha1
kind: LucidEnvironment
metadata:
  name: my-platform
spec:
  infrastructure:
    provider: gcp  # gcp | aws | azure | local
    region: us-central1
    projectId: my-gcp-project
    confidentialComputing:
      enabled: true
    cluster:
      name: lucid-cluster
      nodePools:
        - name: gpu-pool
          machineType: a3-highgpu-8g
          gpuType: nvidia-h100-80gb
          gpuCount: 8
  agents:
    - name: my-assistant
      model:
        id: meta-llama/Llama-3.3-70B
      gpu:
        type: H100
        memory: 80GB
  services:
    observability:
      enabled: true
```
### Step 2: Preview Deployment
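The exact preview subcommand isn't shown in the docs; assuming a Terraform-style `plan` subcommand (hypothetical), previewing without provisioning might look like:

```shell
# Hypothetical subcommand: show what would be deployed, without creating resources.
lucid plan -f my-env.yaml
```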
```
Infrastructure:
  Provider: gcp
  Region:   us-central1
  Cluster:  lucid-cluster
  CC Mode:  True

Agents (1):
  - my-assistant [enabled]
    Model: meta-llama/Llama-3.3-70B
    GPU:   H100 (80GB)

Run 'lucid apply -f <file>' to deploy.
```
### Step 3: Deploy
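Run the apply command (this is the `lucid apply -f <file>` form the preview output points to):

```shell
# Apply the environment; you'll be asked to confirm before cloud resources are provisioned.
lucid apply -f my-env.yaml
```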
```
  - Infrastructure: gcp / us-central1
  - Cluster: lucid-cluster
  - 1 agent(s)
  - Will provision GCP resources (this may incur costs)

Proceed? [y/N]: y

==================================================
Step 1: Infrastructure Provisioning
==================================================
[*] Provisioning GCP infrastructure...
[+] Infrastructure provisioned: lucid-cluster

==================================================
Step 2: Cluster Setup
==================================================
[*] Checking operator at http://localhost:8443...
[!] Lucid operator not found on cluster
Install the Lucid operator now? [y/N]: y
[*] Running: helm install lucid-operator oci://us-central1-docker.pkg.dev/lucid-prod/lucid-charts/lucid-operator -n lucid-system --create-namespace --wait --timeout 5m
[+] Operator installed successfully
[*] Waiting for operator to become ready...
[+] Operator is healthy

==================================================
Agent Deployment
==================================================
Creating agent: my-assistant...
  Created: agent-abc123

==================================================
Deployment Summary
==================================================
Infrastructure: provisioned
Cluster: installed
Agents: 1 deployed

[+] Environment deployed successfully!
```
### Flags
| Flag | Description |
|---|---|
| `--skip-infra` | Use existing cluster, skip infrastructure provisioning |
| `-y, --yes` | Skip confirmation prompts |
| `--managed` | Use Lucid managed deployment |
## Local Development (Mock Mode)
For local development and testing, you can set up a local Kind cluster with mock TEE mode.
### Setup with Kind
First, create a Kind cluster:
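Assuming the cluster name `lucid-local` (it matches the `kind-lucid-local` kubectl context shown below):

```shell
# Create a local Kind cluster named lucid-local.
kind create cluster --name lucid-local
```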
```
✓ Cluster ready
```

```shell
# Verify kubectl context
kubectl cluster-info --context kind-lucid-local
```

```
Kubernetes control plane is running at https://127.0.0.1:PORT
```
Then create a local environment file (`local-env.yaml`):
```yaml
apiVersion: lucid.io/v1alpha1
kind: LucidEnvironment
metadata:
  name: local-dev
spec:
  infrastructure:
    provider: local
  agents:
    - name: test-agent
      model:
        id: meta-llama/Llama-3.3-70B
      gpu:
        type: mock  # Uses mock GPU for local development
```
Deploy it:
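As with the self-hosted flow, `lucid apply` takes the environment file:

```shell
# Deploy the local environment; with provider: local, no cloud resources are provisioned.
lucid apply -f local-env.yaml
```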
```
Observer UI:  http://localhost:3000
Verifier API: http://localhost:8000
```
The operator URL is auto-detected from the `LUCID_OPERATOR_URL` environment variable and defaults to `http://localhost:8443`.
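A quick smoke test that the local services are up, using only the URLs from the deploy output (`curl -f` exits non-zero on HTTP errors; that the root paths respond is an assumption):

```shell
# Probe the local endpoints; adjust ports if your setup differs.
curl -sf http://localhost:8000 > /dev/null && echo "Verifier API reachable"
curl -sf http://localhost:3000 > /dev/null && echo "Observer UI reachable"
```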
Mock Mode provides:
- API contracts identical to production
- Software-based attestation signatures
- Full auditor chain functionality
- Safe development environment
> **Code Portability**
>
> Code developed in Mock Mode works unchanged when deployed to production TEE environments.
## Next Steps
Once your cluster is ready:
- Build Your First Auditor - Create a custom safety auditor
- Deployment Modes Guide - Learn about CLI vs GUI deployment
- View AI Passports - Verify attestation results