RESEARCH

Lucid Labs

Advancing the science of verifiable compute.

If you can't verify it, you can't trust it. We work on the hard problems at the intersection of cryptography, hardware, and policy that make AI systems provably trustworthy.

Our Research

The problems are hard.
That's the point.

Verifiable compute sits at the convergence of confidential computing, formal verification, and international policy. There is no single "solution"; progress requires advances across multiple fronts simultaneously. Labs is where we do the work that doesn't fit into a product roadmap but matters more than anything on it.

Confidential Inference at Scale

Running LLMs inside Trusted Execution Environments (TEEs) comes with real constraints: limited enclave memory, attestation overhead, performance degradation. We're working on extending context windows and improving throughput for models running in confidential enclaves, so that "verified" doesn't have to mean "slow."

TEEs / GPU enclaves / memory-safe inference / attestation pipelines
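To make the attestation step concrete, here is a minimal Python sketch of the core check a client performs before trusting an enclave: comparing the enclave's reported code measurement against the hash of the build it expects to be running. The function name and sample build bytes are illustrative, not part of any real attestation API; a production flow also validates the quote's signature chain back to the hardware vendor's root of trust, which is elided here.

```python
import hashlib
import hmac

def verify_measurement(reported_measurement: bytes, expected_build: bytes) -> bool:
    """Compare an enclave's reported code measurement against the hash of
    the build we expect. Uses a constant-time comparison; real attestation
    additionally verifies the quote's signature against the vendor's root
    of trust (omitted in this sketch)."""
    expected = hashlib.sha384(expected_build).digest()
    return hmac.compare_digest(reported_measurement, expected)

# Hypothetical values for illustration:
build = b"model-server v1.2 binary image"
reported = hashlib.sha384(build).digest()  # what an honest enclave would report

assert verify_measurement(reported, build)
assert not verify_measurement(b"\x00" * 48, build)  # tampered enclave fails
```

Only after this check passes does the client send its prompt; "verified" here means the enclave is provably running the code you audited, not merely the code the operator claims.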

Verifiable Training

How do you prove a model was trained on the data you claim, using the process you specified, without exposing proprietary weights or datasets? This is the foundation of audit-ready AI. We're developing cryptographic evidence chains for the training pipeline — from data provenance through gradient computation to final checkpoint.

proof-of-training / data provenance / cryptographic audit trails / checkpoint attestation
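The evidence-chain idea can be sketched in a few lines: each pipeline event (data ingestion, training configuration, final checkpoint) is folded into a running hash, so the final digest commits to the entire history in order. The `extend_chain` helper and the event records below are hypothetical stand-ins; a production scheme would chain signed records and committed gradient computations, not bare hashes.

```python
import hashlib
import json

def extend_chain(prev_digest: str, record: dict) -> str:
    """Append one pipeline event to the evidence chain. The new digest
    commits to both this record and everything that came before it."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(prev_digest.encode() + payload).hexdigest()

# Hypothetical pipeline events, in the order they occurred:
events = [
    {"stage": "data",       "dataset_digest": hashlib.sha256(b"corpus").hexdigest()},
    {"stage": "config",     "lr": 3e-4, "steps": 10000},
    {"stage": "checkpoint", "weights_digest": hashlib.sha256(b"weights").hexdigest()},
]

digest = "genesis"
for event in events:
    digest = extend_chain(digest, event)

# An auditor who replays the same records reaches the same final digest;
# altering any earlier record changes every digest after it.
```

The point of the chain is that the final checkpoint's digest is meaningless unless the data-provenance and configuration records upstream of it also check out, which is what makes the result audit-ready rather than merely self-reported.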

Remote Location Verification

Data sovereignty laws require proof that computation happened in a specific jurisdiction. IP addresses are trivially spoofed. We're defining standards for physics-based location proofs — using latency measurements bounded by the speed of light to cryptographically verify where hardware physically sits. Minimal trust assumptions. No IP lookups. Just physics.

speed-of-light bounds / latency attestation / data residency proofs / geolocation verification
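The physics argument is simple enough to state as arithmetic. A minimal sketch, assuming a signal speed in optical fiber of roughly 200 km per millisecond (about two-thirds of c in vacuum; the constant and function names are ours): a measured round-trip time puts a hard upper bound on how far away the responding hardware can be, because processing delays only inflate the RTT and therefore only loosen the bound, never break it.

```python
C_FIBER_KM_PER_MS = 200.0  # assumed signal speed in fiber, ~2/3 of c in vacuum

def max_distance_km(rtt_ms: float) -> float:
    """Hard upper bound on the prover's distance from the challenger.
    The signal covers the path twice, so halve the round trip; any
    processing delay inflates the RTT, so the true distance can never
    exceed this bound."""
    return (rtt_ms / 2.0) * C_FIBER_KM_PER_MS

def provably_within(rtt_ms: float, radius_km: float) -> bool:
    """True if the measured RTT proves the hardware sits within
    radius_km of the challenger's known, trusted location."""
    return max_distance_km(rtt_ms) <= radius_km
```

A 5 ms round trip, for instance, bounds the responder to within 500 km of the challenger; several challengers at known sites can intersect their bounds until the only feasible region lies inside the claimed jurisdiction.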

Treaty Verification & AI Governance

International AI governance increasingly depends on whether compliance can be verified under real-world constraints of sovereignty, security, and commercial confidentiality. We bridge hardware-level verification mechanisms with policy design — synthesizing lessons from arms control, climate regimes, and trade law into a compositional framework for AI treaty verification.

Our concept paper, Toward Verifiable International AI Governance, proposes a primitive-level framework that decomposes treaty verification into its fundamental elements: constraint types, observable properties, mechanism families, evidence chain semantics, and confidence-versus-leakage tradeoffs.

treaty-technical co-design / verification primitives / compute governance / managed access protocols
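To illustrate what "primitive-level" means in practice, here is a hypothetical sketch of how those fundamental elements might compose into one concrete verification scheme. The field names mirror the decomposition listed above, but the class, example values, and `cap_check` instance are ours for illustration, not the paper's notation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VerificationPrimitive:
    """One element of a compositional treaty-verification scheme.
    Fields follow the decomposition sketched in the concept paper;
    the concrete strings are illustrative assumptions."""
    constraint: str  # what the treaty restricts, e.g. a training compute cap
    observable: str  # what can actually be measured
    mechanism: str   # the mechanism family producing the measurement
    evidence: str    # how evidence is chained to the verifier
    leakage: str     # what the verifier learns beyond compliance itself

# A compute-cap check composed from these elements (hypothetical example):
cap_check = VerificationPrimitive(
    constraint="training compute cap",
    observable="accumulated FLOP counter",
    mechanism="hardware attestation",
    evidence="signed counter snapshots",
    leakage="approximate utilization, not model contents",
)
```

Decomposing schemes this way makes the confidence-versus-leakage tradeoff explicit per primitive, so negotiators can swap mechanism families without redesigning the whole treaty.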

AI Infrastructure Field Guide

Most people writing AI policy have never been inside a datacenter. Most people building datacenters aren't thinking about verifiability. We maintain a hands-on research facility at Equinix SV4 in Sunnyvale — a single-cabinet testbed for power monitoring, network verification, and physical audit research. Not a training cluster. A verification lab.

Access is free for researchers working on verification, compute governance, AI safety, or hardware trust. The only requirement: publish your findings under an open license.

Apply for Lab Access

Stay in the loop

We publish when we have something worth reading. No cadence, no filler. Leave your email and we'll send research updates as they ship.