Your First Auditor
This tutorial walks you through the complete lifecycle of a Lucid Auditor: development, verification, publishing, and deployment.
Alpha Access Required
The Lucid SDK and CLI are available to alpha participants. Request access to get started.
The Scenario: Prompt Injection Detection
You need to ensure that no prompt injection attacks reach your AI model. You will build an auditor that intercepts requests and blocks those containing malicious injection attempts.
Step 1: Implement the Logic (SDK)
Create a file named main.py. We'll use the Lucid SDK to define a request-phase hook.
```python
from lucid_sdk import AuditorApp, Proceed, Deny

# 1. Create the auditor app
app = AuditorApp("lucid-guardrails-auditor", port=8096)

# 2. Define the safety logic
@app.on_request
def check_injection(data: dict, config=None, lucid_context=None):
    prompt = data.get("prompt", "")

    # Scan for common injection patterns
    injection_patterns = [
        "ignore all previous instructions",
        "disregard the above",
        "system prompt:",
    ]

    prompt_lower = prompt.lower()
    for pattern in injection_patterns:
        if pattern in prompt_lower:
            return Deny(reason=f"Prompt injection detected: {pattern}")

    return Proceed(safety_score=1.0)

# 3. Run the auditor
if __name__ == "__main__":
    app.run()
```
Step 2: Containerize
Lucid Auditors run as sidecars. Create a Dockerfile:
```dockerfile
FROM python:3.12-slim

# Create non-root user
RUN useradd -m -u 1001 appuser
USER appuser
WORKDIR /app

COPY --chown=appuser:appuser main.py .
RUN pip install --user lucid-sdk

# Required labels
LABEL io.lucid.auditor="true"
LABEL io.lucid.schema_version="1.0"
LABEL io.lucid.phase="request"
LABEL io.lucid.interfaces="health,audit"

CMD ["python", "main.py"]
```
Build the image:
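A typical build command looks like the following (the tag `lucid-guardrails-auditor:latest` is our choice for this walkthrough; use whatever tag your registry expects):

```shell
# Build the auditor image from the Dockerfile in the current directory
docker build -t lucid-guardrails-auditor:latest .
```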
Step 3: Verify Compliance (CLI)
Before deploying, ensure your container meets the Lucid Auditor Standard (correct labels and health endpoints).
```text
[+] Compliance probe successful!
[*] Verification complete. Auditor is compliant.
```
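Independently of the Lucid tooling, you can read the required labels back with plain `docker inspect` as a quick local check (the image tag here is our assumption):

```shell
# Print the io.lucid.auditor label; a compliant image should report "true"
docker inspect \
  --format '{{ index .Config.Labels "io.lucid.auditor" }}' \
  lucid-guardrails-auditor:latest
```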
Step 4: Sign & Notarize
Register your auditor's cryptographic digest with the Lucid Verifier. This ensures that only your authorized code can run in the TEE (Trusted Execution Environment).
```text
Registering digest with Verifier...
[+] Auditor published and notarized.
```
Step 5: Define the Environment with Audit Chain
Create a file named my-env.yaml to define your agent with the custom auditor:
```yaml
apiVersion: lucid.io/v1alpha1
kind: LucidEnvironment
metadata:
  name: secure-agent
spec:
  infrastructure:
    provider: gcp
    region: us-central1
  agents:
    - name: my-secure-agent
      model:
        id: meta-llama/Llama-3.3-70B
      gpu:
        type: H100
        memory: 80GB
      auditChain:
        preRequest:
          - auditorId: lucid-guardrails-auditor
            name: Injection Detection
            env:
              INJECTION_THRESHOLD: "0.8"
              INJECTION_BLOCK_ON_DETECTION: "true"
```
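The `env` entries are delivered to the auditor container as ordinary environment variables, so the hook can read them with the standard library. A minimal sketch (the variable names mirror the YAML above; the fallback defaults are our illustration, not SDK-defined behavior):

```python
import os

# Read the tuning knobs declared in the auditChain env block.
# The defaults used when the variables are unset are illustrative only.
threshold = float(os.environ.get("INJECTION_THRESHOLD", "0.8"))
block_on_detection = (
    os.environ.get("INJECTION_BLOCK_ON_DETECTION", "true").lower() == "true"
)

print(threshold, block_on_detection)
```

Parsing these once at startup (rather than per request) keeps the hot path of the auditor cheap.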
Step 6: Deploy
Deploy your environment:
```text
Created: agent-abc123
[+] Environment deployed successfully!
```

Check the status of the running agent:

```shell
lucid status my-secure-agent
```

```text
Agent: my-secure-agent
ID: agent-abc123
Status: running
Model: meta-llama/Llama-3.3-70B
GPU: H100
```
Results
- Sidecar Injection: The Lucid platform automatically injects your lucid-guardrails-auditor into the agent workload.
- Enforcement: Every request to your model now passes through the injection detector.
- AI Passport: Every response includes a cryptographically signed passport proving the injection check was performed.
```text
Hardware Attested: true
Auditors:
  - lucid-guardrails-auditor: PROCEED (safety_score: 1.0)
```
Next Steps
- Auditor Development Guide - Learn advanced SDK patterns and lifecycle hooks
- Policy as Code - Define complex audit chains
- Production Checklist - Prepare for production deployment