Add a composable trust module around your AI agents with a single line of code. Run inside a Trusted Execution Environment (TEE) and generate hardware-signed audit trails that legal can verify, so you can get back to shipping. Deploy to AWS, Azure, GCP, or any TEE-enabled infrastructure.
Lucid separates governance from agent logic. An "Auditor" is a trust module that attaches to your agent runtime and observes inference without changing model behavior.
Auditors are automatically applied at deployment time. No changes to your agent code or execution flow required.
The Auditor operates directly in the inference path (gRPC / REST), inspecting inputs and outputs before they exit the secure execution boundary.
Unlike conventional middleware, Lucid Auditors run inside Trusted Execution Environments (Intel TDX, AMD SEV-SNP). They don't just record events; they cryptographically sign them using keys bound to the underlying processor.
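The signing step can be illustrated in miniature. In this sketch the processor-bound key is replaced by a local stand-in (`HARDWARE_KEY` is a hypothetical value; a real TEE key never leaves the CPU), and each audit event carries an HMAC-SHA256 signature:

```python
import hashlib
import hmac
import json

# Stand-in for a key derived inside the enclave. In a real TEE this
# key is sealed to the processor and is never exported.
HARDWARE_KEY = b"enclave-sealed-key-stand-in"

def sign_event(event: dict) -> dict:
    """Serialize an audit event deterministically and attach a signature."""
    payload = json.dumps(event, sort_keys=True).encode()
    signature = hmac.new(HARDWARE_KEY, payload, hashlib.sha256).hexdigest()
    return {"event": event, "sig": signature}

def verify_event(signed: dict) -> bool:
    """Recompute the signature and compare in constant time."""
    payload = json.dumps(signed["event"], sort_keys=True).encode()
    expected = hmac.new(HARDWARE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["sig"])

signed = sign_event({"action": "inference", "model": "llama-3"})
assert verify_event(signed)
```

Because the key is bound to the hardware, a verifier who trusts the attested enclave can trust any log entry that passes this check.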
Develop trust and validation policies in standard Python today, with first-class support for TEE remote attestation and inspection of agent inputs and outputs at execution time. Additional languages can be introduced without changing the trust model.
import lucid.auditor as auditor
from lucid.types import Request, Response, Decision

# 1. Enforce Data Residency via Hardware Attestation
@auditor.on_request
def enforce_sovereignty(ctx: Request) -> Decision:
    # Verify the cryptographic quote from the TEE hardware
    hw_quote = ctx.attestation.get_quote()
    if hw_quote.location.country_code != "SE":
        # Block request at the network layer; never reaches the model
        return auditor.Deny(
            reason="HARDWARE_LOCATION_MISMATCH",
            details={"required": "SE", "actual": hw_quote.location.country_code},
        )
    return auditor.Proceed()

# 2. Redact PII from Model Output (DLP)
@auditor.on_response
def scrub_output(ctx: Response) -> Decision:
    if ctx.detect_pii(threshold=0.9):
        # Mutate the response stream in-flight
        cleaned_text = ctx.redact(strategy="hash")
        return auditor.Modify(body=cleaned_text)
    # Log the hash of the clean response to the immutable ledger
    ctx.ledger.log(ctx.hash)
    return auditor.Proceed()
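The decision flow of the `on_response` hook above can be exercised locally without a TEE. The stand-in types below (`StubResponse`, `Modify`, `Proceed`) are illustrative only, not part of the Lucid SDK:

```python
from dataclasses import dataclass, field

# Minimal stand-ins for the SDK decision types; names are illustrative.
@dataclass
class Modify:
    body: str

@dataclass
class Proceed:
    pass

@dataclass
class StubResponse:
    text: str
    pii_score: float               # pretend classifier confidence
    ledger: list = field(default_factory=list)

    def detect_pii(self, threshold: float) -> bool:
        return self.pii_score >= threshold

    def redact(self, strategy: str) -> str:
        return "[REDACTED]"

def scrub_output(ctx: StubResponse):
    # Same branching as the auditor above: redact on a PII hit,
    # otherwise log a digest of the clean response and proceed.
    if ctx.detect_pii(threshold=0.9):
        return Modify(body=ctx.redact(strategy="hash"))
    ctx.ledger.append(hash(ctx.text))
    return Proceed()

assert isinstance(scrub_output(StubResponse("SSN 123-45-6789", 0.95)), Modify)
assert isinstance(scrub_output(StubResponse("all clear", 0.1)), Proceed)
```

This kind of harness lets policy logic be unit-tested in CI before it is sealed into an enclave image.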
Treat compliance artifacts exactly like software artifacts.
Author your trust or policy logic as a module. Python-based modules, for example, are packaged in the standard OCI container image format.
Build the module and generate a cryptographic measurement (MRENCLAVE) of its contents. Because deployments pin this hash, auditor code that has been tampered with (whether by the cloud provider or a root user) fails attestation and never runs.
Declare how trust modules are composed and applied to your agents at deployment time. Lucid handles secure attachment and execution inside the enclave.
FROM lucid/auditor-base:2.1-alpine
COPY sovereignty_auditor.py /app/main.py
# The base image handles the TEE bootstrap and attestation daemon
ENTRYPOINT ["/init", "python", "/app/main.py"]
$ lucid build -t my-org/sovereignty-auditor:v1 .
> Building OCI image...
> Measuring enclave binary...
> MRENCLAVE: 8f4b2e... (Signed by Dev Key)
> Pushing to Lucid Registry...
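Conceptually, the measurement printed above is a digest over the module's contents. The sketch below is a simplification: a real MRENCLAVE is computed over enclave memory pages at load time, not a flat file hash, but the tamper-evidence property is the same.

```python
import hashlib
import tempfile
from pathlib import Path

def measure(paths) -> str:
    """Hash file contents in a deterministic order: a simplified
    stand-in for an enclave measurement such as MRENCLAVE."""
    h = hashlib.sha256()
    for path in sorted(str(p) for p in paths):
        h.update(Path(path).read_bytes())
    return h.hexdigest()

# Any change to the module contents changes the measurement.
with tempfile.TemporaryDirectory() as d:
    mod = Path(d) / "main.py"
    mod.write_text("print('auditor')")
    m1 = measure([mod])
    mod.write_text("print('auditor')  # tampered")
    m2 = measure([mod])
    assert m1 != m2
```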
Chain multiple auditors together to satisfy complex regulatory requirements (e.g., EU AI Act + GDPR). The chain operates as a DAG (Directed Acyclic Graph), where every step must pass for the transaction to succeed.
apiVersion: lucid.sh/v1alpha1
kind: VerifiableAgent
metadata:
  name: finance-agent-prod
spec:
  # Your existing model container
  workload:
    image: internal/llama-3-finetune:v4
    gpu: nvidia-h100
  # The Governance Chain
  auditChain:
    policy: FailClosed  # If sidecar crashes, network cuts traffic
    auditors:
      # 1. Marketplace Auditor (off-the-shelf)
      - name: "eu-ai-act-logger"
        image: lucid-registry/eu-conformity:v1.2
        config:
          logLevel: "FORENSIC"
      # 2. Custom Auditor (your internal logic)
      - name: "sovereignty-check"
        image: my-org/sovereignty-auditor:v1
        attestation:
          # Only allow this exact code hash to run
          measurement: "8f4b2e..."
Conventional observability tools run in user space. If the host is compromised, your logs are compromised. Lucid moves the "Truth" into the hardware.
We abstract the complexity of generating Intel SGX/TDX quotes.
Logs written via ctx.ledger are hashed and chained. You don't just get a log file; you get a mathematical proof of sequence.
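The chained structure can be sketched with a simple hash chain, where each entry commits to its predecessor. This is a simplification of the actual ledger format:

```python
import hashlib

GENESIS = "0" * 64  # placeholder hash before the first entry

def append_entry(chain: list, record: str) -> list:
    """Each entry's hash covers the previous hash, so deleting,
    reordering, or editing any entry breaks every hash after it."""
    prev = chain[-1]["hash"] if chain else GENESIS
    digest = hashlib.sha256((prev + record).encode()).hexdigest()
    chain.append({"record": record, "hash": digest})
    return chain

def verify_chain(chain: list) -> bool:
    prev = GENESIS
    for entry in chain:
        expected = hashlib.sha256((prev + entry["record"]).encode()).hexdigest()
        if expected != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
for rec in ("req:abc", "resp:def"):
    append_entry(log, rec)
assert verify_chain(log)
log[0]["record"] = "req:tampered"   # any edit is detectable
assert not verify_chain(log)
```

Combined with hardware signing, this gives a verifier both integrity of each entry and proof of the order in which entries were written.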
Write auditors in standard Python today; additional languages such as Go and Rust can be added without changing the trust model.