Architecture Overview
Lucid uses a multi-party chain of custody modeled after the IETF RATS (Remote ATtestation procedureS) architecture (RFC 9334), providing a framework for verifiable AI execution built on hardware-based roots of trust.
Architecture Components
```mermaid
flowchart TB
    subgraph Customer["Customer Cluster"]
        direction TB
        Operator["Lucid Operator"]
        subgraph TEE["Trusted Execution Environment (TEE)"]
            direction TB
            AI["AI Workload<br/>(LLM / Model)"]
            subgraph Chain["Auditor Chain"]
                direction LR
                A1["Artifact<br/>Auditor"]
                A2["Request<br/>Auditor"]
                A3["Execution<br/>Auditor"]
                A4["Response<br/>Auditor"]
                A1 --> A2 --> A3 --> A4
            end
            AI <--> Chain
        end
        subgraph Attestation["Attestation Layer"]
            CoCo["CoCo AA/AS<br/>(or Mock)"]
        end
        Chain -->|"Signed<br/>Measurements"| CoCo
        Operator -.->|"Injects Sidecars"| TEE
    end
    subgraph SaaS["Lucid SaaS Platform"]
        direction TB
        Verifier["Verifier<br/>(FastAPI)"]
        Passport["AI Passport"]
        Observer["Observer UI<br/>(Trust Dashboard)"]
        Verifier -->|"Issues"| Passport
        Passport --> Observer
    end
    CoCo -->|"Evidence"| Verifier
```
The 6-Step Verification Flow
The lifecycle of a secure AI request follows six distinct stages:
1. Workload Provisioning (The Attester)
The Lucid Operator identifies a verifiable workload, provisions a TEE environment, and injects the auditor sidecars. The workload then acts as an "Attester," capable of producing cryptographic quotes of its internal state.
2. Policy Definition (The Rulebook)
Developers define safety logic using the Lucid SDK. This specification defines what data is allowed, what must be redacted, and what telemetry is collected across the four phases (Build, Input, Runtime, Output).
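A policy of this shape might be declared like the sketch below. The class and field names here are illustrative assumptions, not the actual Lucid SDK API:

```python
from dataclasses import dataclass, field

# Hypothetical policy shapes -- the real Lucid SDK types may differ.
@dataclass
class PhasePolicy:
    allowed_fields: list = field(default_factory=list)   # data this phase may pass through
    redact_patterns: list = field(default_factory=list)  # regexes whose matches must be stripped
    telemetry: list = field(default_factory=list)        # measurements this phase emits

@dataclass
class AuditorPolicy:
    build: PhasePolicy
    input: PhasePolicy
    runtime: PhasePolicy
    output: PhasePolicy

policy = AuditorPolicy(
    build=PhasePolicy(telemetry=["model_hash"]),
    input=PhasePolicy(redact_patterns=[r"\b\d{3}-\d{2}-\d{4}\b"]),  # e.g. US SSNs
    runtime=PhasePolicy(telemetry=["token_count"]),
    output=PhasePolicy(allowed_fields=["text"]),
)
print(policy.input.redact_patterns)
```

The point is that each of the four phases carries its own allow/redact/telemetry rules, and the whole object can be hashed and attested alongside the workload.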
3. Evidence Collection
As the model runs, sidecars monitor behavior and collect cryptographically signed Evidence through the hardware Attestation Agent.
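Conceptually, each measurement is hash-chained to the previous one and signed before leaving the enclave. The sketch below uses stdlib HMAC as a stand-in for the mock ECDSA signer; in production the CoCo Attestation Agent signs inside the TEE:

```python
import hashlib
import hmac
import json

MOCK_AA_KEY = b"dev-only-mock-key"  # stand-in key; a real AA keeps its key in hardware

def sign_measurement(prev_digest: str, claims: dict) -> dict:
    """Produce one hash-chained, signed evidence entry."""
    body = json.dumps({"prev": prev_digest, "claims": claims}, sort_keys=True)
    digest = hashlib.sha256(body.encode()).hexdigest()
    sig = hmac.new(MOCK_AA_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"claims": claims, "prev": prev_digest, "digest": digest, "sig": sig}

e1 = sign_measurement("0" * 64, {"phase": "input", "redactions": 2})
e2 = sign_measurement(e1["digest"], {"phase": "runtime", "tokens": 512})
print(e2["prev"] == e1["digest"])  # chaining links each entry to the one before it
```

Because each entry commits to its predecessor's digest, dropping or reordering measurements after the fact is detectable at appraisal time.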
4. Evidence Appraisal (The Verifier)
The Lucid Verifier appraises the collected Evidence against the defined policy. It validates hardware quotes to ensure the environment has not been tampered with.
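Appraisal boils down to two checks: the signature must verify against the trust root, and the attested claims must match the expected policy. A minimal sketch, again with HMAC standing in for quote verification against a hardware vendor's root:

```python
import hashlib
import hmac
import json

KEY = b"dev-only-mock-key"  # mock trust root; CoCo AS would use the hardware vendor's root

def appraise(entry: dict, expected_policy_hash: str) -> bool:
    """Recompute the signature and compare the attested policy hash."""
    sig = hmac.new(KEY, json.dumps(entry["claims"], sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, entry["sig"])
            and entry["claims"]["policy_hash"] == expected_policy_hash)

claims = {"policy_hash": "abc123", "phase": "runtime"}
entry = {"claims": claims,
         "sig": hmac.new(KEY, json.dumps(claims, sort_keys=True).encode(),
                         hashlib.sha256).hexdigest()}
print(appraise(entry, "abc123"), appraise(entry, "other"))  # True False
```

If either check fails, the Verifier refuses to issue a passport for the response.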
5. Observability & Logging (The Observer)
Verified results and audit logs are stored by the Lucid Observer. This provides a transparent record for auditing model behavior in real time.
6. Passport Verification (Relying Party)
Downstream systems (Relying Parties) can verify the AI Passport. This ensures that the model response was generated within a compliant TEE environment.
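A relying party only needs the Verifier's public trust material to check a passport offline. The sketch below assumes a passport is a signed payload; HMAC stands in for the signature scheme, and the field names are illustrative:

```python
import hashlib
import hmac
import json

VERIFIER_KEY = b"dev-only-verifier-key"  # stand-in for the Lucid Verifier's signing key

def verify_passport(passport: dict) -> bool:
    """Relying-party check: recompute the signature over the canonical payload."""
    payload = json.dumps(passport["payload"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, passport["sig"])

payload = {"workload": "llm-chat", "tee": "mock", "verdict": "pass"}
passport = {"payload": payload,
            "sig": hmac.new(VERIFIER_KEY, json.dumps(payload, sort_keys=True).encode(),
                            hashlib.sha256).hexdigest()}
print(verify_passport(passport))  # True
```

Any tampering with the payload (say, flipping `verdict` to `"pass"` on a failed run) invalidates the signature.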
Operational Modes: Mock vs. Production
Lucid supports two modes to balance development speed with production security.
| Service | Local (Mock Mode) | Production (CoCo/TEE) |
|---|---|---|
| Hardware | Standard CPU | Intel SGX, AMD SEV, AWS Nitro |
| Signing | Mock AA (ECDSA) | CoCo AA (Hardware TEE Quote) |
| Verification | Mock AS | CoCo AS (Hardware Trust Root) |
| Security | Logic Simulation | Hardware-Enforced |
Both modes use identical API contracts, ensuring that code developed locally functions unchanged in production TEE environments.
Code Portability
You can develop 100% of your safety logic locally using Mock Mode. The same code will function identically when deployed to a hardware-secured cluster.
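One way this plays out in practice: application code calls a single attestation interface and a deployment flag selects the backend. The `LUCID_MODE` variable and backend names below are hypothetical, not documented Lucid configuration:

```python
import os

def attestation_backend() -> str:
    """Pick the attestation backend; calling code is identical either way."""
    # Hypothetical switch -- the real deployment flag may be named differently.
    return "coco" if os.environ.get("LUCID_MODE") == "production" else "mock"

os.environ["LUCID_MODE"] = "local"
print(attestation_backend())  # mock
os.environ["LUCID_MODE"] = "production"
print(attestation_backend())  # coco
```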
System Sequence Diagram
```mermaid
sequenceDiagram
    participant User
    participant CLI as Lucid CLI
    participant K8s as K8s (Operator)
    participant TEE as TEE (Enclave)
    participant Verifier as Verifier (Appraisal)
    User->>CLI: lucid deploy apply
    CLI->>K8s: Provision Attester (TEE)
    K8s->>TEE: Inject Sidecars & Policy
    User->>TEE: Request (Logic Check)
    TEE->>TEE: Collect Evidence
    TEE->>Verifier: Submit Evidence
    Verifier-->>TEE: Evidence Verified
    TEE-->>User: Result + AI Passport
```
What's Next?
- Deep dive into the Auditor Phases to see where to place your logic.
- Check out the Getting Started Guide to build and deploy your first auditor.
- See the Glossary for definitions of security terms.