
Architecture Overview

Lucid uses a multi-party chain of custody modeled on the IETF RATS (Remote ATtestation procedureS) architecture (RFC 9334), providing a framework for verifiable AI execution backed by hardware roots of trust.

Architecture Components

flowchart TB
    subgraph Customer["🏒 Customer Cluster"]
        direction TB

        Operator["☸️ Lucid Operator"]

        subgraph TEE["🔒 Trusted Execution Environment (TEE)"]
            direction TB
            AI["🤖 AI Workload<br/>(LLM / Model)"]

            subgraph Chain["Auditor Chain"]
                direction LR
                A1["📋 Artifact<br/>Auditor"]
                A2["🛡️ Request<br/>Auditor"]
                A3["⚡ Execution<br/>Auditor"]
                A4["📤 Response<br/>Auditor"]
                A1 --> A2 --> A3 --> A4
            end

            AI <--> Chain
        end

        subgraph Attestation["🔐 Attestation Layer"]
            CoCo["CoCo AA/AS<br/>(or Mock)"]
        end

        Chain --> |"Signed<br/>Measurements"| CoCo
        Operator -.-> |"Injects Sidecars"| TEE
    end

    subgraph SaaS["☁️ Lucid SaaS Platform"]
        direction TB
        Verifier["✅ Verifier<br/>(FastAPI)"]
        Passport["🛂 AI Passport"]
        Observer["📊 Observer UI<br/>(Trust Dashboard)"]

        Verifier --> |"Issues"| Passport
        Passport --> Observer
    end

    CoCo --> |"Evidence"| Verifier

Serverless Architecture

In serverless mode, Lucid manages shared TEE resource pools. Customers get instant deployment without provisioning infrastructure, while maintaining the same hardware-backed security guarantees.

See the Deployment Modes guide for serverless configuration and the TEE concepts page for attestation details.


The 6-Step Verification Flow

The lifecycle of a secure AI request follows six distinct stages:

1. Workload Provisioning (The Attester)

The Lucid Operator identifies a verifiable workload. It provisions a TEE environment and injects the sidecars. The server acts as an "Attester," capable of producing cryptographic quotes of its internal state.

2. Policy Definition (The Rulebook)

Developers define safety logic using the Lucid SDK. This specification defines what data is allowed, what must be redacted, and what telemetry is collected across the four phases (Build, Input, Runtime, Output).
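As a rough illustration of what such a policy specification might look like, here is a minimal sketch in plain Python. The class names, field names, and `phase()` helper are all hypothetical; the real Lucid SDK API may differ.

```python
from dataclasses import dataclass, field

# The four phases named in the text.
PHASES = ("build", "input", "runtime", "output")

@dataclass
class PhasePolicy:
    allowed_fields: list = field(default_factory=list)  # data allowed through
    redact: list = field(default_factory=list)          # fields that must be masked
    telemetry: list = field(default_factory=list)       # measurements to collect

@dataclass
class Policy:
    phases: dict = field(default_factory=dict)

    def phase(self, name: str) -> PhasePolicy:
        # Reject anything outside the four documented phases.
        if name not in PHASES:
            raise ValueError(f"unknown phase: {name}")
        return self.phases.setdefault(name, PhasePolicy())

policy = Policy()
policy.phase("input").redact.append("email")
policy.phase("output").telemetry.append("token_count")
```

The point of the sketch is the shape of the rulebook: one sub-policy per phase, each declaring allowed data, redactions, and telemetry.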

3. Evidence Collection

As the model runs, sidecars monitor behavior and collect cryptographically signed Evidence through the hardware Attestation Agent.

4. Evidence Appraisal (The Verifier)

The Lucid Verifier appraises the collected Evidence against the defined policy. It validates hardware quotes to ensure the environment has not been tampered with.

5. Observability & Logging (The Observer)

Verified results and audit logs are stored by the Lucid Observer, providing a transparent record for auditing model behavior in real time.

6. Passport Verification (Relying Party)

Downstream systems (Relying Parties) can verify the AI Passport. This ensures that the model response was generated within a compliant TEE environment.
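Conceptually, a Relying Party checks that the passport's claims carry a valid signature from the Verifier. The sketch below uses an HMAC as a stand-in for the real signature scheme, which the text does not specify; the passport field names are also illustrative.

```python
import hashlib
import hmac
import json

# Stand-in for the Verifier's signing key; in practice a Relying Party
# would hold the Verifier's public key and check an asymmetric signature.
VERIFIER_KEY = b"demo-key"

def issue_passport(claims: dict) -> dict:
    # Canonicalize claims so signing and verification see identical bytes.
    body = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(VERIFIER_KEY, body, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": tag}

def verify_passport(passport: dict) -> bool:
    body = json.dumps(passport["claims"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, passport["signature"])

p = issue_passport({"tee": "compliant", "model": "demo-llm"})
assert verify_passport(p)
```

Tampering with any claim after issuance changes the canonical bytes, so verification fails.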

πŸ” Operational Modes: Mock vs. Production

Lucid supports two modes to balance development speed with production security.

| Service | Local (Mock Mode) | Production (CoCo/TEE) |
|---|---|---|
| Hardware | Standard CPU | Intel SGX, AMD SEV, AWS Nitro |
| Signing | Mock AA (ECDSA) | CoCo AA (Hardware TEE Quote) |
| Verification | Mock AS | CoCo AS (Hardware Trust Root) |
| Security | Logic Simulation | Hardware-Enforced |

Both modes use identical API contracts, ensuring that code developed locally functions unchanged in production TEE environments.

Code Portability

You can develop 100% of your safety logic locally using Mock Mode. The same code will function identically when deployed to a hardware-secured cluster.
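One way to picture the identical-contract idea: both modes implement the same signing interface, so auditor code never branches on the mode. The class and method names below are illustrative, not the real Lucid API; the mock uses an HMAC where production would produce a hardware TEE quote.

```python
import hashlib
import hmac
from abc import ABC, abstractmethod

class AttestationAgent(ABC):
    """Single contract shared by mock and production backends."""
    @abstractmethod
    def sign(self, measurement: bytes) -> bytes: ...

class MockAA(AttestationAgent):
    """Local development: software signing, no TEE required."""
    def sign(self, measurement: bytes) -> bytes:
        return hmac.new(b"mock-key", measurement, hashlib.sha256).digest()

class CoCoAA(AttestationAgent):
    """Production: would delegate to the hardware attestation agent."""
    def sign(self, measurement: bytes) -> bytes:
        raise NotImplementedError("requires a real TEE quote")

def collect_evidence(agent: AttestationAgent, measurement: bytes) -> dict:
    # Caller code is identical in both modes; only the injected agent differs.
    return {"measurement": measurement, "signature": agent.sign(measurement)}

evidence = collect_evidence(MockAA(), b"model-hash")
```

Swapping `MockAA()` for the production agent is the only change between environments.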

System Sequence Diagram

sequenceDiagram
    participant User
    participant CLI as Lucid CLI
    participant K8s as K8s (Operator)
    participant TEE as TEE (Enclave)
    participant Verifier as Verifier (Appraisal)

    User->>CLI: lucid deploy apply
    CLI->>K8s: Provision Attester (TEE)
    K8s->>TEE: Inject Sidecars & Policy
    User->>TEE: Request (Logic Check)
    TEE->>TEE: Collect Evidence
    TEE->>Verifier: Appraise Evidence
    Verifier-->>TEE: Evidence Verified
    TEE-->>User: Result + AI Passport

🔧 Hardware Endorser Devices

For high-assurance deployments, Lucid supports hardware endorser devices that provide additional cryptographic attestation beyond standard TEE quotes:

| Device | Role | Signal Provided |
|---|---|---|
| DC-SCM | Hardware root of trust | Power telemetry, secure boot, tamper detection |
| FlexNIC | Network monitoring | Collective detection, flow patterns |
| GPU CC | Confidential computing | Memory encryption, GPU attestation |
| TPM 2.0 | Platform integrity | PCR measurements, measured boot |

Multi-Signal Verification

When endorser devices are present, the system correlates four independent signals for workload classification:

  1. Kernel structure (Inspector TEE): Training = backward pass + optimizer
  2. Network patterns (FlexNIC): Training = all-reduce collectives
  3. Power profile (DC-SCM): Training = sustained high utilization
  4. Memory behavior (Inspector TEE): Training = stores activations

All signals must be mutually consistent for high-confidence classification. This creates defense-in-depth: an attacker cannot forge one signal without creating detectable inconsistencies in the others.
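The consistency rule above can be sketched as a simple vote: classify a workload as training only when all four signals agree, and flag any mixed result as an inconsistency. The signal names mirror the list above; the string values are invented placeholders.

```python
def classify(signals: dict) -> str:
    # One vote per independent signal source.
    training_votes = [
        signals["kernel"] == "backward+optimizer",   # Inspector TEE
        signals["network"] == "all-reduce",          # FlexNIC
        signals["power"] == "sustained-high",        # DC-SCM
        signals["memory"] == "stores-activations",   # Inspector TEE
    ]
    if all(training_votes):
        return "training"
    if not any(training_votes):
        return "inference"
    # Mixed signals: a forged signal surfaces as a detectable inconsistency.
    return "inconsistent"

label = classify({"kernel": "backward+optimizer", "network": "all-reduce",
                  "power": "sustained-high", "memory": "stores-activations"})
# All four signals agree, so label is "training".
```

An attacker who forges, say, the kernel signal while the network still shows no collectives lands in the `"inconsistent"` branch rather than passing as inference.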

Extended Deployment Types

Beyond the traditional full-stack deployment, Lucid supports four deployment types that can be composed into complex architectures:

| Type | Components | Use Case |
|---|---|---|
| full | Model + App + Auditors | Traditional end-to-end deployment (today's default) |
| model | Model + Auditors (no UI) | Headless API backend, used by workflows |
| app | Frontend only (no LLM) | UI that receives its LLM backend from a workflow orchestrator |
| bridge | Protocol adapter | Translates between external protocols and OpenAI-compatible APIs |

flowchart LR
    subgraph Full["full deployment"]
        FA["App"] --> FM["Model"]
        FM --> FAud["Auditors"]
    end

    subgraph Headless["model deployment"]
        HM["Model"] --> HAud["Auditors"]
    end

    subgraph Frontend["app deployment"]
        AA["App"] -->|"via workflow"| HM
    end

    subgraph Adapter["bridge deployment"]
        BW["Webhook"] --> BT["Translator"] --> HM
    end

Each deployment type carries its own TEE attestation and auditor chain. The deployment type is set via the deployment_type field in the LucidEnvironment spec.
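An illustrative LucidEnvironment fragment might look like the following. Only the deployment_type field is documented above; the apiVersion, kind, and remaining fields are placeholders and may not match the real CRD.

```yaml
# Hypothetical LucidEnvironment fragment; only deployment_type is
# documented in the text -- other fields are illustrative.
apiVersion: lucid.io/v1        # assumed group/version
kind: LucidEnvironment
metadata:
  name: headless-backend
spec:
  deployment_type: model       # one of: full, model, app, bridge
```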

Workflows: Composing Deployments

Workflows are a composition layer that wires typed deployments together into a single logical application. A workflow is a JSON graph where nodes reference deployments and edges define intent-based routing conditions.

The key design principle: the LLM is the router. Workflows compile down to an orchestrator system prompt and a set of MCP tool registrations. There is no runtime engine, no LangGraph, no Temporal -- the orchestrator LLM reads the system prompt and uses MCP tools to route requests to the appropriate downstream deployments.
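To make the compile-down step concrete, here is a minimal sketch: a workflow graph reduced to a routing system prompt plus a list of MCP tool registrations. The JSON shape, node/edge fields, and `route_to_*` naming scheme are invented for illustration.

```python
# Hypothetical workflow graph: nodes reference deployments, edges carry
# intent-based routing conditions, as described in the text.
workflow = {
    "nodes": [
        {"id": "support", "deployment": "support-model", "type": "model"},
        {"id": "billing", "deployment": "billing-model", "type": "model"},
    ],
    "edges": [
        {"to": "support", "when": "the user asks a technical question"},
        {"to": "billing", "when": "the user asks about invoices or payments"},
    ],
}

def compile_workflow(wf: dict) -> tuple[str, list]:
    # One MCP tool registration per downstream deployment.
    tools = [{"name": f"route_to_{n['id']}", "deployment": n["deployment"]}
             for n in wf["nodes"]]
    # Edges become routing rules in the orchestrator's system prompt.
    rules = "\n".join(f"- Call route_to_{e['to']} when {e['when']}."
                      for e in wf["edges"])
    prompt = "You are a router. Dispatch each request via MCP tools:\n" + rules
    return prompt, tools

prompt, tools = compile_workflow(workflow)
```

There is nothing left to execute after compilation: the orchestrator LLM reads the prompt and calls the registered tools, which is the whole runtime.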

See the Workflows concept page for full documentation.

MCP: Inter-Service Communication

Every Lucid service (auditors, verifier, gateway) now exposes MCP (Model Context Protocol) tools via a /mcp endpoint. Services publish tool metadata at /.well-known/mcp for discovery.

MCP serves two roles in the architecture:

  1. Workflow routing -- The orchestrator LLM calls MCP tools to dispatch requests to downstream deployments
  2. Service integration -- External systems access Lucid capabilities (PII scanning, guardrails checks, deployment management) through a unified tool interface

The MCP Gateway federates tool access across all services, providing a single entry point with OAuth 2.1 authentication for external clients and mTLS for internal service-to-service calls.
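The discovery mechanism can be pictured as a small metadata document per service, which the gateway merges across services. The field names below are illustrative; the actual document published at /.well-known/mcp may differ.

```python
# Hypothetical shape of one service's MCP discovery document.
discovery = {
    "endpoint": "/mcp",
    "tools": [
        {"name": "pii_scan", "description": "Scan text for PII"},
        {"name": "guardrails_check", "description": "Run guardrails checks"},
    ],
}

def find_tool(doc: dict, name: str):
    # A gateway federating several services would merge their tool lists
    # before lookup; here we search a single service's document.
    return next((t for t in doc["tools"] if t["name"] == name), None)

tool = find_tool(discovery, "pii_scan")
```

A client only needs the well-known path and a tool name; the gateway handles which backing service actually serves the call.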

See the MCP concept page for details.

What's Next?

  • Deep dive into the Auditor Phases to see where to place your logic.
  • Check out the First Auditor Guide to build and deploy your first auditor.
  • Learn about Workflows for composing deployments.
  • Explore MCP for inter-service communication.
  • See the Glossary for definitions of security terms.