
Auditor Interface Specification

Version: 0.1.0-alpha | Status: Final | Last Updated: January 2026

Overview

This document defines the standard interface that all Lucid auditors must implement to participate in the verification chain. It enables:

  • Loose coupling between verifier and auditors
  • Dynamic auditor discovery and registration
  • RFC 9334 (RATS) compliant evidence format with Claims and Evidence
  • Consistent error handling across the chain

Auditor Lifecycle

flowchart TD
    subgraph Lifecycle["AUDITOR LIFECYCLE"]
        A["1. INITIALIZATION"] --> A1["Load config, warmup models, register with verifier"]
        A1 --> B["2. ANALYSIS"]
        B --> B1["Receive data, analyze, produce Claims"]
        B1 --> C["3. EVIDENCE SUBMISSION"]
        C --> C1["Bundle Claims into Evidence, sign, submit to verifier"]
        C1 --> D["4. CHAIN FORWARDING (optional)"]
        D --> D1["Forward to next auditor in chain"]
    end

HTTP API Specification

Required Endpoints

POST /audit

Main endpoint for receiving data to audit.

Request:

{
  "data": {
    "input": "User prompt or request",
    "output": "Model response (optional)",
    "metadata": {
      "model_id": "gpt-4",
      "session_id": "sess-123",
      "user_id": "user-456"
    }
  },
  "lucid_context": {
    "trace_id": "trace-789",
    "chain_position": 1,
    "chain_length": 3,
    "verifier_url": "https://verifier.lucid.example.com",
    "previous_auditors": ["injection-auditor"]
  }
}

Response (Success):

{
  "status": "success",
  "evidence": [
    {
      "evidence_id": "ev-tox-abc123",
      "attester_id": "toxicity-auditor",
      "attester_type": "auditor",
      "claims": [
        {
          "name": "toxicity-auditor",
          "type": "safety_score",
          "value": {
            "toxicity_score": 0.12,
            "categories": {
              "threat": 0.05,
              "insult": 0.08,
              "obscene": 0.03
            },
            "blocked": false
          },
          "timestamp": "2026-01-19T12:00:00Z",
          "confidence": 0.95
        }
      ],
      "phase": "response",
      "generated_at": "2026-01-19T12:00:00Z",
      "signature": "tee-sig-abc123..."
    }
  ],
  "forward_to_next": true,
  "modified_data": null
}

Response (Block):

{
  "status": "blocked",
  "reason": "Toxicity threshold exceeded",
  "evidence": [...],
  "forward_to_next": false
}
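
A caller receiving one of the responses above must decide whether the chain continues. The sketch below shows that decision; the function name is illustrative, not part of the spec.

```python
def should_forward(response: dict) -> bool:
    """Return True when the chain should continue to the next auditor."""
    # A blocked response always terminates the chain
    if response.get("status") == "blocked":
        return False
    # Otherwise the auditor's explicit flag decides
    return bool(response.get("forward_to_next", False))
```

Note that error responses may still set "forward_to_next": true (see Error Handling below), so the flag is honored for any non-blocked status.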

GET /health

Health check endpoint.

Response:

{
  "status": "healthy",
  "auditor_id": "toxicity-auditor-v1",
  "version": "1.2.3",
  "capabilities": ["text_analysis", "multilingual"],
  "ready": true
}

GET /capabilities

Auditor capability discovery.

Response:

{
  "auditor_id": "toxicity-auditor-v1",
  "auditor_type": "safety",
  "version": "1.2.3",
  "schema_version": "2.0.0",
  "capabilities": {
    "input_types": ["text", "structured"],
    "output_types": ["evidence", "block", "passthrough"],
    "features": ["multilingual", "batch_processing"],
    "max_input_length": 32768,
    "supported_models": ["*"]
  },
  "claim_schemas": {
    "safety_score": {
      "type": "object",
      "properties": {
        "toxicity_score": {"type": "number", "minimum": 0, "maximum": 1},
        "categories": {"type": "object"},
        "blocked": {"type": "boolean"}
      },
      "required": ["toxicity_score", "blocked"]
    }
  },
  "configuration": {
    "threshold": {
      "type": "number",
      "default": 0.7,
      "env": "TOXICITY_THRESHOLD"
    }
  }
}
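
A verifier can use the claim_schemas advertised here to validate incoming claim values. A minimal sketch, handling only the JSON Schema subset used above (required keys, basic types, number minimum/maximum); a full implementation would use a real JSON Schema validator.

```python
def validate_claim_value(value: dict, schema: dict) -> list:
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    for key in schema.get("required", []):
        if key not in value:
            errors.append(f"missing required field: {key}")
    for key, spec in schema.get("properties", {}).items():
        if key not in value:
            continue
        v = value[key]
        # bool is excluded explicitly: isinstance(True, int) is True in Python
        if spec.get("type") == "number" and (isinstance(v, bool) or not isinstance(v, (int, float))):
            errors.append(f"{key}: expected number")
        elif spec.get("type") == "boolean" and not isinstance(v, bool):
            errors.append(f"{key}: expected boolean")
        if "minimum" in spec and isinstance(v, (int, float)) and v < spec["minimum"]:
            errors.append(f"{key}: below minimum {spec['minimum']}")
        if "maximum" in spec and isinstance(v, (int, float)) and v > spec["maximum"]:
            errors.append(f"{key}: above maximum {spec['maximum']}")
    return errors
```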

Optional Endpoints

POST /batch

Batch processing for multiple inputs.

{
  "items": [
    {"data": {...}, "lucid_context": {...}},
    {"data": {...}, "lucid_context": {...}}
  ]
}

GET /metrics

Prometheus-compatible metrics endpoint.

# HELP auditor_requests_total Total audit requests
# TYPE auditor_requests_total counter
auditor_requests_total{status="success"} 1234
auditor_requests_total{status="blocked"} 56
auditor_requests_total{status="error"} 7

# HELP auditor_latency_seconds Audit latency
# TYPE auditor_latency_seconds histogram
auditor_latency_seconds_bucket{le="0.1"} 1000
auditor_latency_seconds_bucket{le="0.5"} 1200
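
A dependency-free sketch of producing the counter lines above in the Prometheus text exposition format; in practice an auditor would typically use a client library such as prometheus_client instead.

```python
def render_counter(name: str, help_text: str, samples: dict) -> str:
    """Render a labeled counter in Prometheus text exposition format."""
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} counter"]
    for status, value in samples.items():
        # Doubled braces in the f-string emit literal { } label syntax
        lines.append(f'{name}{{status="{status}"}} {value}')
    return "\n".join(lines)
```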

Evidence & Claim Schema (RFC 9334 RATS)

Claim Structure (Unsigned Assertion)

interface Claim {
  // Claim name (e.g., auditor identifier)
  name: string;

  // Claim type identifier (e.g., "safety_score", "injection_detection")
  type: ClaimType;

  // Claim value (schema depends on type)
  value: Record<string, any>;

  // ISO 8601 timestamp
  timestamp: string;

  // Confidence score (0.0 to 1.0)
  confidence?: number;

  // Execution phase
  phase?: string;

  // Compliance mapping (optional)
  compliance_framework?: string;
  control_id?: string;
}

Evidence Structure (Signed Container)

interface Evidence {
  // Schema version
  schema_version: string;  // "2.0.0"

  // Unique evidence identifier
  evidence_id: string;

  // Attester identification
  attester_id: string;     // e.g., "lucid-guardrails-auditor"
  attester_type: EvidenceSource;  // "auditor", "tee", "verifier"

  // Claims bundle (one or more Claims)
  claims: Claim[];

  // Execution phase
  phase: string;  // "request", "response", "artifact", "runtime"

  // Generation timestamp
  generated_at: string;

  // Single signature covering ALL claims
  signature: string;

  // Trust assessment (filled by Verifier)
  trust_tier?: TrustTier;

  // Optional ZK proof
  zk_proof?: ZKProofSchema;
}
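
The two structures above compose as follows: unsigned Claims are assembled first, then bundled into a signed Evidence container. A sketch with the signing step stubbed out (real auditors sign via the TEE; see Security Requirements); the helper names are illustrative.

```python
import json
import uuid
from datetime import datetime, timezone


def make_claim(name, claim_type, value, confidence=None, phase=None):
    """Build an unsigned Claim per the interface above."""
    claim = {
        "name": name,
        "type": claim_type,
        "value": value,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    if confidence is not None:
        claim["confidence"] = confidence
    if phase is not None:
        claim["phase"] = phase
    return claim


def make_evidence(attester_id, claims, phase, sign):
    """Bundle Claims into a signed Evidence container."""
    return {
        "schema_version": "2.0.0",
        "evidence_id": f"ev-{uuid.uuid4().hex[:8]}",
        "attester_id": attester_id,
        "attester_type": "auditor",
        "claims": claims,
        "phase": phase,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        # A single signature covers the entire claims bundle
        "signature": sign(json.dumps(claims, sort_keys=True)),
    }
```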

Standard Claim Types

Type | Description | Value Schema
safety_score | Safety/toxicity analysis | {score: number, categories: object, blocked: boolean}
injection_detection | Prompt injection check | {detected: boolean, confidence: number, patterns: string[]}
compliance_check | Regulatory compliance | {compliant: boolean, violations: string[], framework: string}
performance_metric | Performance measurements | {latency_ms: number, tokens: number, cost: number}
evaluation_result | Benchmark evaluation | {benchmark: string, score: number, passed: boolean}
sovereignty_check | Data residency verification | {location: string, compliant: boolean, jurisdiction: string}
observability_trace | Tracing information | {trace_id: string, spans: object[], duration_ms: number}

Error Handling

Error Response Format

{
  "status": "error",
  "error": {
    "code": "AUDITOR_TIMEOUT",
    "message": "Analysis timed out after 30 seconds",
    "retryable": true,
    "details": {
      "timeout_ms": 30000,
      "partial_results": true
    }
  },
  "evidence": [],
  "forward_to_next": true
}

Standard Error Codes

Code | Description | Retryable
AUDITOR_TIMEOUT | Processing timeout | Yes
AUDITOR_OVERLOAD | Rate limit exceeded | Yes
INVALID_INPUT | Malformed request | No
UNSUPPORTED_MODEL | Model not supported | No
INTERNAL_ERROR | Internal auditor error | Yes
TEE_ATTESTATION_FAILED | TEE verification failed | No
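
A caller's retry policy can combine the per-response "retryable" flag with the code table above as a fallback. A sketch, with illustrative names and an assumed attempt cap:

```python
# Codes the table above marks as retryable
RETRYABLE_CODES = {"AUDITOR_TIMEOUT", "AUDITOR_OVERLOAD", "INTERNAL_ERROR"}


def should_retry(error_response: dict, attempt: int, max_attempts: int = 3) -> bool:
    """Decide whether to retry a failed audit call."""
    if attempt >= max_attempts:
        return False
    err = error_response.get("error", {})
    # Prefer the explicit per-response flag; fall back to the code table
    if "retryable" in err:
        return bool(err["retryable"])
    return err.get("code") in RETRYABLE_CODES
```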

Chain Forwarding Protocol

Context Propagation

Each auditor must forward the lucid_context to the next auditor with updates:

{
  "lucid_context": {
    "trace_id": "trace-789",
    "chain_position": 2,  // Incremented
    "chain_length": 3,
    "verifier_url": "https://verifier.lucid.example.com",
    "previous_auditors": ["injection-auditor", "toxicity-auditor"]  // Appended
  }
}
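
The update shown above reduces to two operations: increment chain_position and append the current auditor's id. A sketch (the function name is illustrative):

```python
def advance_context(ctx: dict, auditor_id: str) -> dict:
    """Return the lucid_context to forward to the next auditor."""
    updated = dict(ctx)  # shallow copy; the incoming context is not mutated
    updated["chain_position"] = ctx["chain_position"] + 1
    updated["previous_auditors"] = ctx["previous_auditors"] + [auditor_id]
    return updated
```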

Chain Termination

An auditor can terminate the chain by:

  1. Setting "forward_to_next": false in its response
  2. Returning "status": "blocked"

The verifier still receives evidence from the auditors that completed before the chain was terminated.

Registration Protocol

Auditor Self-Registration

On startup, auditors should register with the verifier:

POST /v1/auditors/register
Content-Type: application/json

{
  "auditor_id": "toxicity-auditor-v1",
  "endpoint": "http://toxicity-auditor:8080",
  "capabilities": {...},
  "health_check_interval": 30
}

Verifier Discovery Response

{
  "registered": true,
  "auditor_id": "toxicity-auditor-v1",
  "chain_position": 2,
  "next_auditor": "http://eval-auditor:8080"
}
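
A sketch of assembling the registration payload and reading the discovery response; the helper names are illustrative, and the field names follow the examples above.

```python
from typing import Optional


def registration_payload(auditor_id: str, endpoint: str,
                         capabilities: dict, interval: int = 30) -> dict:
    """Build the body POSTed to /v1/auditors/register on startup."""
    return {
        "auditor_id": auditor_id,
        "endpoint": endpoint,
        "capabilities": capabilities,
        "health_check_interval": interval,
    }


def next_hop(discovery: dict) -> Optional[str]:
    """Extract the next auditor's URL from the verifier's response."""
    if not discovery.get("registered"):
        return None
    return discovery.get("next_auditor")
```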

Security Requirements

TEE Attestation

All auditors must:

  1. Run in a TEE environment (or MOCK mode for development)
  2. Sign evidence using TEE attestation
  3. Include attestation evidence in auditor_signature

Signature Format

{
  "auditor_signature": {
    "tee": "COCO",
    "evidence": "base64-encoded-attestation",
    "runtime_data_hash": "sha256:abc123...",
    "timestamp": "2026-01-19T12:00:00Z"
  }
}
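
One plausible construction of runtime_data_hash is SHA-256 over the canonical JSON serialization of the claims bundle. The exact hash input is an assumption here; the format above only fixes the "sha256:<hex>" prefix.

```python
import hashlib
import json


def runtime_data_hash(claims: list) -> str:
    """Hash the claims bundle for inclusion in the TEE signature.

    The canonical-JSON input is an assumed convention, not mandated
    by the spec.
    """
    canonical = json.dumps(claims, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```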

Configuration

Environment Variables

All auditors should support these standard variables:

Variable | Description | Required
LUCID_VERIFIER_URL | Verifier service URL | Yes
MODEL_ID | Target model identifier | Yes
AUDITOR_URL | Next auditor in chain | No
HTTP_TIMEOUT | Request timeout (seconds) | No
HTTP_CHAIN_TIMEOUT | Chain forward timeout | No
TEE_PROVIDER | TEE provider (COCO, MOCK) | Yes
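
A sketch of loading these variables with required/optional handling. It takes a plain mapping so it is easy to test; a real auditor would pass os.environ. The timeout defaults are illustrative, not from the spec.

```python
REQUIRED = ("LUCID_VERIFIER_URL", "MODEL_ID", "TEE_PROVIDER")


def load_config(env: dict) -> dict:
    """Resolve the standard variables, failing fast on missing ones."""
    missing = [k for k in REQUIRED if not env.get(k)]
    if missing:
        raise RuntimeError(f"missing required variables: {', '.join(missing)}")
    return {
        "verifier_url": env["LUCID_VERIFIER_URL"],
        "model_id": env["MODEL_ID"],
        "tee_provider": env["TEE_PROVIDER"],
        "next_auditor": env.get("AUDITOR_URL"),  # None ends the chain
        # Defaults below are assumptions for the sketch
        "http_timeout": int(env.get("HTTP_TIMEOUT", "30")),
        "chain_timeout": int(env.get("HTTP_CHAIN_TIMEOUT", "60")),
    }
```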

Versioning

Schema Versioning

Evidence and Claim schemas use semantic versioning:

  • Major: Breaking changes (e.g., v1.x Measurement → v2.x Evidence/Claim)
  • Minor: New optional fields added
  • Patch: Documentation/description changes

Backward Compatibility

The verifier accepts only the current schema version (v2.x Evidence/Claim model); the previous major version (v1.x Measurement model) is deprecated and has been removed.
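
Under these semantics, compatibility reduces to a major-version match. A minimal sketch:

```python
def schema_compatible(evidence_version: str, verifier_version: str = "2.0.0") -> bool:
    """Evidence is accepted when its schema major version matches the verifier's."""
    return evidence_version.split(".")[0] == verifier_version.split(".")[0]
```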

Example Implementations

Minimal Python Auditor

from fastapi import FastAPI
from lucid_sdk import LucidClient
from lucid_schemas import ClaimType

# AuditRequest (the request model) and analyze() are assumed to be
# defined elsewhere by the auditor implementation.

app = FastAPI()
client = LucidClient()

@app.post("/audit")
async def audit(request: AuditRequest):
    # Perform analysis
    result = analyze(request.data)

    # Create claim (unsigned assertion)
    claim = client.create_claim(
        name="my-auditor",
        type=ClaimType.safety_score,
        value={"result": result, "score": 0.95, "passed": True},
        phase="response",
    )

    # Bundle claims into signed evidence
    evidence = client.create_evidence(
        claims=[claim],
        phase="response",
    )

    # Submit to verifier (model_id lives under data.metadata in the
    # request schema, not in lucid_context)
    await client.submit_evidence(
        model_id=request.data["metadata"]["model_id"],
        evidence=[evidence],
    )

    return {"status": "success", "evidence": [evidence]}

Changelog

v1.0.0 (January 2026)

  • Initial specification
  • Standard claim types
  • Chain forwarding protocol
  • Registration protocol