FOR DEVELOPERS

Composable Trust Layer for Agentic AI.

Wrap your AI agents in a composable trust module with a single line of code. Run them inside a Trusted Execution Environment (TEE) and generate hardware-signed audit trails that legal can verify, so you can get back to shipping. Deploy to AWS, Azure, GCP, or any TEE-enabled infrastructure.

01 — ARCHITECTURE

Decoupled Governance

Lucid separates governance from agent logic. An "Auditor" is a trust module that attaches to your agent runtime and observes inference without changing model behavior.

Zero Code Changes

Auditors are automatically applied at deployment time. No changes to your agent code or execution flow required.

Runtime Interception

The Auditor operates directly in the inference path (gRPC / REST), inspecting inputs and outputs before they exit the secure execution boundary.
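
To make the pattern concrete, here is a minimal sketch of request/response interception around a generic inference callable. The wrapper and hook names are illustrative, not the Lucid API.

interception_sketch.py
from typing import Callable

def intercepted(infer: Callable[[str], str],
                inspect_request: Callable[[str], None],
                inspect_response: Callable[[str], str]) -> Callable[[str], str]:
    # Wrap an inference call so every input is inspected before the model
    # runs and every output is inspected before it leaves the boundary.
    def wrapper(prompt: str) -> str:
        inspect_request(prompt)        # may deny / raise before inference
        raw = infer(prompt)
        return inspect_response(raw)   # may redact or log before returning
    return wrapper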

Hardware Root of Trust

Unlike conventional middleware, Lucid Auditors run inside Trusted Execution Environments (Intel TDX, AMD SEV-SNP). They don't just record events; they cryptographically sign them using keys bound to the underlying processor.
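
For intuition, here is a minimal sketch of verifying one hardware-signed audit record, assuming the verifier has already extracted the enclave's ECDSA public key from an attestation quote. The record fields and helper name are illustrative, not the Lucid SDK.

verify_record_sketch.py
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

def verify_audit_record(record: dict, attested_pubkey_pem: bytes) -> bool:
    # Returns True only if the record was signed by the key bound to the enclave.
    public_key = serialization.load_pem_public_key(attested_pubkey_pem)
    try:
        public_key.verify(
            bytes.fromhex(record["signature"]),  # signature over the payload
            record["payload"].encode(),          # the event being attested
            ec.ECDSA(hashes.SHA256()),
        )
        return True
    except InvalidSignature:
        return False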

02 — IMPLEMENTATION

Composable Policy Modules

Develop trust and validation policies in standard Python today, with first-class support for TEE remote attestation and inspection of agent inputs and outputs at execution time. Additional languages can be introduced without changing the trust model.

sovereignty_auditor.py
import lucid.auditor as auditor
from lucid.types import Request, Response, Decision

# 1. Enforce Data Residency via Hardware Attestation
@auditor.on_request
def enforce_sovereignty(ctx: Request) -> Decision:
    # Verify the cryptographic quote from the TEE hardware
    hw_quote = ctx.attestation.get_quote()

    if hw_quote.location.country_code != "SE":
        # Block request at the network layer; never reaches the model
        return auditor.Deny(
            reason="HARDWARE_LOCATION_MISMATCH",
            details={"required": "SE", "actual": hw_quote.location.country_code}
        )
    return auditor.Proceed()

# 2. Redact PII from Model Output (DLP)
@auditor.on_response
def scrub_output(ctx: Response) -> Decision:
    if ctx.detect_pii(threshold=0.9):
        # Mutate the response stream in-flight
        cleaned_text = ctx.redact(strategy="hash")
        return auditor.Modify(body=cleaned_text)

    # Log the hash of the clean response to the immutable ledger
    ctx.ledger.log(ctx.hash)
    return auditor.Proceed()

03 — WORKFLOW

From Local Dev to Secure Enclave

Treat compliance artifacts exactly like software artifacts.

Step 1: Package the Module

Author your trust or policy logic as a module. For example, package Python-based modules using a standard OCI container format.

Step 2: Measure & Sign

Build the module and generate a cryptographic measurement (MRENCLAVE) of its contents. At attestation time, this hash proves the auditor code has not been tampered with by the cloud provider or a root user.

Step 3: Declarative Attachment

Declare how trust modules are composed and applied to your agents at deployment time. Lucid handles secure attachment and execution inside the enclave.

Dockerfile
FROM lucid/auditor-base:2.1-alpine
COPY sovereignty_auditor.py /app/main.py
# The base image handles the TEE bootstrap and attestation daemon
ENTRYPOINT ["/init", "python", "/app/main.py"]
terminal
$ lucid build -t my-org/sovereignty-auditor:v1 .
> Building OCI image...
> Measuring enclave binary...
> MRENCLAVE: 8f4b2e... (Signed by Dev Key)
> Pushing to Lucid Registry...

04 — ORCHESTRATION

Composable Audit Chains

Chain multiple auditors together to satisfy complex regulatory requirements (e.g., EU AI Act + GDPR). The chain operates as a DAG (Directed Acyclic Graph), where every step must pass for the transaction to succeed.
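
The execution semantics can be pictured as a short loop: each auditor returns a Decision, any Deny short-circuits the chain, and a Modify hands a mutated context to the next step. The sketch below is illustrative; only the Decision types come from the auditor example above, and with_body is a hypothetical helper.

audit_chain_sketch.py
import lucid.auditor as auditor

def run_chain(auditors, ctx):
    # Evaluate auditors in order; the transaction succeeds only if every
    # step passes (fail-closed).
    for check in auditors:
        decision = check(ctx)
        if isinstance(decision, auditor.Deny):
            return decision                     # one failing step blocks the transaction
        if isinstance(decision, auditor.Modify):
            ctx = ctx.with_body(decision.body)  # hypothetical helper: hand the mutated
                                                # payload to the next auditor
    return auditor.Proceed()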

deployment.yaml
apiVersion: lucid.sh/v1alpha1
kind: VerifiableAgent
metadata:
  name: finance-agent-prod
spec:
  # Your existing model container
  workload:
    image: internal/llama-3-finetune:v4
    gpu: nvidia-h100

  # The Governance Chain
  auditChain:
    policy: FailClosed # If sidecar crashes, network cuts traffic
    auditors:
      # 1. Marketplace Auditor (Off-the-shelf)
      - name: "eu-ai-act-logger"
        image: lucid-registry/eu-conformity:v1.2
        config:
          logLevel: "FORENSIC"

      # 2. Custom Auditor (Your internal logic)
      - name: "sovereignty-check"
        image: my-org/sovereignty-auditor:v1
        attestation:
          # Only allow this exact code hash to run
          measurement: "8f4b2e..."

05 — THE LUCID SDK

Why not just use LangChain/LangSmith?

Observability tools run in user-space. If the host is compromised, your logs are compromised. Lucid moves the "Truth" into the hardware.

Attestation-as-a-Service

We abstract the complexity of generating Intel SGX/TDX quotes.

Immutable Ledgering

Logs written via ctx.ledger are hashed and chained. You don't just get a log file; you get a mathematical proof of sequence.
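
A minimal sketch of the chaining idea: each entry's hash commits to the previous entry's hash, so deleting or reordering any record breaks every hash that follows it. This illustrates the concept, not the on-ledger format.

hash_chain_sketch.py
import hashlib
import json

def chain_entry(prev_hash: str, payload: dict) -> str:
    # The hash of this entry commits to the previous entry's hash.
    blob = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

genesis = "0" * 64
h1 = chain_entry(genesis, {"event": "response_logged"})
h2 = chain_entry(h1, {"event": "response_logged"})  # depends on h1, and so on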

Polyglot Support

Write auditors in Python, Go, or Rust.

Explore the SDK Documentation