Policy as Code
Lucid separates safety logic from infrastructure using a Policy-as-Code approach. This guide covers two levels of policy definition:
- Auditor Chain Configuration (`auditors.yaml`) - defines which auditors run and in what order
- Lucid Policy Language (LPL) (`.policy.yaml`) - declarative rules for claim validation and compliance
Part 1: Auditor Chains (auditors.yaml)
You define your security guardrails in a high-level auditors.yaml manifest, and the Lucid CLI/Operator handles the injection.
Manifest Schema
The auditors.yaml file defines a chain of auditors that will be injected into your secured pods.
```yaml
chain:
  - name: string        # Unique name for this auditor instance
    description: string # (Optional) Description of the auditor's purpose
    image: string       # The OCI image tag (must be published to Lucid Verifier)
    script: string      # (Optional) Path to the auditor's entrypoint script
    port: integer       # The internal port the auditor listens on (e.g., 8081)
    env:                # (Optional) Environment variables for the sidecar
      KEY: VALUE
    labels:             # (Optional) Custom K8s labels for the sidecar container
      key: value
```
How Chaining Works
Auditors are executed sequentially in the order they appear in the chain list.
- Request Flow: Traffic enters the Pod → Auditor 1 → Auditor 2 → ... → AI Model.
- Short-Circuiting: If any auditor returns a DENY decision, the request is immediately blocked and the model is never reached.
- Redaction: Auditors can modify (redact) the payload before passing it to the next link in the chain.
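Conceptually, the chain behaves like the following sketch. The `Decision` type and auditor callables here are hypothetical illustrations, not the actual Lucid SDK API:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Decision:
    verdict: str   # "PROCEED" or "DENY" (illustrative values)
    payload: dict  # possibly-redacted payload passed down the chain

def run_chain(auditors: List[Callable[[dict], Decision]], payload: dict) -> Tuple[str, dict]:
    """Evaluate auditors in order; short-circuit on the first DENY."""
    for auditor in auditors:
        decision = auditor(payload)
        if decision.verdict == "DENY":
            return "DENY", payload       # the model is never reached
        payload = decision.payload       # redactions flow to the next link
    return "PROCEED", payload

# Example: a redacting auditor followed by a blocking auditor
redact = lambda p: Decision("PROCEED", {**p, "text": p["text"].replace("secret", "[REDACTED]")})
block_toxic = lambda p: Decision("DENY" if "attack" in p["text"] else "PROCEED", p)

verdict, out = run_chain([redact, block_toxic], {"text": "my secret plan"})
# verdict == "PROCEED", out["text"] == "my [REDACTED] plan"
```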
Example: Multi-Stage Security
```yaml
chain:
  - name: lucid-injection-auditor   # instance names must be unique within the chain
    image: "lucid-guardrails-auditor:latest"
    script: lucid-guardrails-auditor/main.py
    port: 8090
    description: "LLM Guard-based prompt injection and jailbreak detection"
    env:
      INJECTION_THRESHOLD: "0.8"
      INJECTION_BLOCK_ON_DETECTION: "true"
  - name: lucid-toxicity-auditor
    image: "lucid-guardrails-auditor:latest"
    script: lucid-guardrails-auditor/main.py
    port: 8093
    description: "Detoxify-based multi-label toxicity detection"
    env:
      TOXICITY_THRESHOLD: "0.7"
```
Usage with CLI
When deploying with the Lucid CLI, you specify the auditor manifest using the --auditors flag. The CLI confirms the configured chain:
```
Auditors configured: lucid-injection-auditor, lucid-toxicity-auditor
```
The CLI will:
1. Read the auditor definitions.
2. Register them with the Lucid Verifier.
3. The Lucid Operator then uses this configuration to inject the sidecars during Pod creation.
Part 2: Lucid Policy Language (LPL)
While auditors.yaml defines which auditors run, LPL policies define what rules each auditor enforces. LPL is a declarative DSL for defining claim validation rules with RFC 9334 RATS compliance.
Why LPL?
| Before (Env Vars) | After (LPL Policy) |
|---|---|
| `ALLOWED_REGIONS=IN` | Formal rule with human-readable description |
| No claim validation | Required/optional claims with JSON Schema validation |
| Implicit enforcement | Explicit enforcement modes (block/warn/log/audit) |
| No compliance mapping | Direct mapping to regulations and controls |
| Config scattered in env | Single policy file, version controlled |
Policy Schema
```yaml
# policies/dpdp-locality.policy.yaml
schema_version: "1.0.0"
policy_id: dpdp-locality-v1
version: "1.0.0"
name: "DPDP Data Locality Policy"
description: "Ensures data processing occurs within Indian jurisdiction per DPDP Act 2023"
verification_method: "Intel TDX attestation + Lucid landmark endorsement"

# What claims must the auditor produce?
required_claims:
  - name: location.country
    type: conformity
    required: true
    min_confidence: 0.8
  - name: tee.attestation
    type: security_finding
    required: true
    value_schema:
      type: object
      properties:
        tee_type: { enum: ["TDX", "SEV-SNP", "NITRO"] }
        quote: { type: string, minLength: 10 }

optional_claims:
  - name: location.state
    type: conformity

# Policy rules (evaluated in order)
rules:
  - id: india-only
    description: "Data must be processed within Indian jurisdiction"
    condition: "claims['location.country'].value == 'IN'"
    action: deny
    message: "Processing location outside India violates DPDP Act"
  - id: min-confidence
    description: "Location verification must meet confidence threshold"
    condition: "claims['location.country'].confidence >= 0.8"
    action: warn
    message: "Location confidence below threshold"
  - id: tee-required
    description: "TEE attestation must be present"
    condition: "claims['tee.attestation'].value['quote'] is not None"
    action: deny
    message: "Missing TEE attestation"

# How to handle violations
enforcement: block  # block | warn | log | audit | shadow

# Compliance framework mapping
compliance_frameworks:
  - dpdp
  - rbi_localization
control_mappings:
  dpdp: "Section 17 - Processing of personal data outside India"
  rbi_localization: "2018 Circular - Storage of Payment System Data"
```
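To make the claim-validation step concrete, here is an illustrative check of a produced `tee.attestation` value against the `value_schema` fragment above, written in plain Python. This is not the SDK's validator (which presumably uses a real JSON Schema engine); it only shows what the schema asserts:

```python
def validate_tee_claim(value: dict) -> list:
    """Check a tee.attestation claim value against the value_schema above.
    Illustrative stand-in for JSON Schema validation."""
    errors = []
    # enum: ["TDX", "SEV-SNP", "NITRO"]
    if value.get("tee_type") not in ("TDX", "SEV-SNP", "NITRO"):
        errors.append("tee_type must be one of TDX, SEV-SNP, NITRO")
    # type: string, minLength: 10
    quote = value.get("quote")
    if not isinstance(quote, str) or len(quote) < 10:
        errors.append("quote must be a string with minLength 10")
    return errors

errors = validate_tee_claim({"tee_type": "SGX", "quote": "short"})
# errors now lists both schema violations
```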
LPL Expression Syntax
Policy conditions use a safe expression language (no `eval`). As the examples above show, a rule's `condition` states the requirement that must hold; the rule's `action` fires when the condition evaluates to false. Supported operations:
| Operation | Example |
|---|---|
| Equality | `claims['location.country'].value == 'IN'` |
| Comparison | `claims['toxicity.score'].confidence >= 0.8` |
| Logical | `claims['a'].value and claims['b'].value` |
| None check | `claims['tee.quote'].value is not None` |
| Nested access | `claims['result'].value['nested_key']` |
| Config access | `claims['toxicity.score'].value < config.threshold` |
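One common way to evaluate such conditions without `eval` is to walk a restricted AST that only admits the operations in the table. The sketch below is illustrative, not the SDK's actual implementation (a production evaluator would also restrict attribute names):

```python
import ast
import operator
from types import SimpleNamespace

_OPS = {
    ast.Eq: operator.eq, ast.NotEq: operator.ne,
    ast.Lt: operator.lt, ast.LtE: operator.le,
    ast.Gt: operator.gt, ast.GtE: operator.ge,
    ast.Is: operator.is_, ast.IsNot: operator.is_not,
    ast.In: lambda a, b: a in b,
}

def safe_eval(expr: str, names: dict):
    """Evaluate a restricted expression: names, constants, subscripts,
    attribute access, comparisons, and boolean logic only (no calls)."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.Name):
            return names[node.id]
        if isinstance(node, ast.Subscript):      # claims['location.country']
            return walk(node.value)[walk(node.slice)]
        if isinstance(node, ast.Attribute):      # .value / .confidence / config.threshold
            return getattr(walk(node.value), node.attr)
        if isinstance(node, ast.Compare):
            left = walk(node.left)
            for op, comp in zip(node.ops, node.comparators):
                right = walk(comp)
                if not _OPS[type(op)](left, right):
                    return False
                left = right
            return True
        if isinstance(node, ast.BoolOp):
            results = [walk(v) for v in node.values]
            return all(results) if isinstance(node.op, ast.And) else any(results)
        raise ValueError(f"disallowed syntax: {type(node).__name__}")
    return walk(ast.parse(expr, mode="eval"))

# Claims as simple objects with .value / .confidence, config with attributes
claims = {"location.country": SimpleNamespace(value="IN", confidence=0.92)}
config = SimpleNamespace(threshold=0.8)
ok = safe_eval(
    "claims['location.country'].value == 'IN' and "
    "claims['location.country'].confidence >= config.threshold",
    {"claims": claims, "config": config},
)  # True
```

Anything outside the allowed node types (function calls, imports, lambdas) raises `ValueError` instead of executing.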
Policy Config (Dynamic Thresholds)
Instead of hardcoding thresholds in rule conditions, you can define configuration values in the config section and reference them via config.* syntax:
```yaml
# policies/toxicity-policy.yaml
schema_version: "1.0.0"
policy_id: toxicity-v1
name: "Toxicity Policy"

# Configuration values - can be updated without changing rules
config:
  toxicity_threshold: 0.8
  enable_pii_detection: true
  model_version: "v2"
  allowed_regions:
    - US
    - EU
    - IN

rules:
  - id: toxicity-check
    description: "Block toxic content above threshold"
    condition: "claims['toxicity.score'].value < config.toxicity_threshold"
    action: deny
    message: "Content toxicity exceeds configured threshold"
  - id: region-check
    description: "Verify request from allowed region"
    condition: "claims['location.country'].value in config.allowed_regions"
    action: deny
    message: "Request from unauthorized region"
```
Benefits of PolicyConfig:

| Before (Hardcoded) | After (PolicyConfig) |
|---|---|
| `claims['score'].value < 0.8` | `claims['score'].value < config.threshold` |
| Redeploy policy to change 0.8 → 0.7 | Update config value, policy auto-refreshes |
| Settings scattered in rules | Single `config` section for all settings |
| No visibility into current settings | Config visible in Observer UI |
Using Policies in Auditors
```python
from lucid_sdk import (
    AuditDecision, Deny, PolicyEngine, Proceed, create_auditor, load_policy,
)

# Load policy from YAML
policy = load_policy("policies/dpdp-locality.policy.yaml")
engine = PolicyEngine(policy)

builder = create_auditor(auditor_id="dpdp-auditor")

@builder.on_request
def check_locality(data: dict):
    # Generate claims from your verification logic
    claims = verify_location(data)

    # Evaluate against policy
    result = engine.evaluate(claims)
    if result.decision == AuditDecision.DENY:
        return Deny(
            reason=engine.get_reason(),
            metadata={
                "policy_id": policy.policy_id,
                "triggered_rules": [r.rule_id for r in result.rule_results if r.triggered],
            },
        )
    return Proceed()
```
RATS-Compliant Appraisal
For RFC 9334 compliance, use the appraise_evidence() method which sets the EAR trust tier:
```python
from lucid_schemas import Evidence

# Appraise Evidence and set trust_tier
appraised = engine.appraise_evidence(evidence)
print(f"Trust Tier: {appraised.trust_tier}")  # AFFIRMING, WARNING, CONTRAINDICATED

# Access per-claim appraisal for visualization
for claim in appraised.appraisal_record['claim_appraisals']:
    print(f"{claim['claim_name']}: {claim['status']}")
    print(f"  Actual: {claim['claim_value']}")
    print(f"  Expected: {claim['reference_value']} ({claim['reference_operator']})")
```
Enforcement Modes
| Mode | Behavior |
|---|---|
| `block` | Deny request if any rule with `action: deny` triggers |
| `warn` | Allow request but flag violation (returns WARN) |
| `log` | Silent logging, always proceeds |
| `audit` | Requires human review before proceeding |
| `shadow` | Evaluate policy but don't enforce (for testing/staging) |
Shadow Mode for Safe Rollouts
Use shadow mode to test new policies in production without affecting traffic. Claims are evaluated and logged, but decisions are not enforced. This is useful for validating policy behavior before switching to block.
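The idea can be sketched as a thin wrapper around any evaluator: run the real evaluation, record the would-be decision, and always allow the request. This is a hypothetical illustration, not the SDK's shadow-mode implementation:

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("shadow")

def shadow_evaluate(evaluate: Callable[[dict], str], claims: dict) -> str:
    """Run the real evaluation, log the would-be decision, always proceed."""
    would_be = evaluate(claims)
    if would_be == "DENY":
        log.info("shadow: policy would have denied this request")
    return "PROCEED"  # traffic is never affected in shadow mode

# Toy evaluator standing in for a real PolicyEngine
toy = lambda c: "DENY" if c.get("toxicity", 0) > 0.8 else "PROCEED"
decision = shadow_evaluate(toy, {"toxicity": 0.95})  # logs, but returns "PROCEED"
```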
Policy Bundles
Group multiple policies for deployment profiles:
```yaml
# bundles/india-compliance.bundle.yaml
schema_version: "1.0.0"
bundle_id: india-compliance-bundle
name: "India AI Compliance Bundle"
policies:
  - policy_id: dpdp-locality-v1
    # ... full policy definition
  - policy_id: rbi-data-v1
    # ... full policy definition

# Rules that span multiple auditors
composite_rules:
  - id: cross-auditor-check
    description: "Both location and data residency must pass"
    condition: "claims['location.verified'].value and claims['data.residency'].value"
    action: proceed
    message: "All compliance checks passed"
```
Per-Claim Compliance Tracking
After policy evaluation, each claim gets an appraisal record (EAR-compliant):
```python
# The appraisal_record contains per-claim details
record = evidence.appraisal_record

# Summary stats
print(f"Claims Affirming: {record['claims_affirming']}")
print(f"Claims Contraindicated: {record['claims_contraindicated']}")

# Individual claim results
for claim in record['claim_appraisals']:
    print(f"{claim['claim_name']}")
    print(f"  Status: {claim['status']}")  # AFFIRMING, WARNING, CONTRAINDICATED
    print(f"  Value: {claim['claim_value']} vs Expected: {claim['reference_value']}")
    print(f"  Triggered Rules: {claim['triggered_rules']}")
    print(f"  Compliance: {claim['compliance_framework']} - {claim['control_id']}")
```
Part 3: Dynamic Policy Loading
Policies can be loaded dynamically from various sources without redeploying auditors. This enables centralized policy management and real-time updates.
Policy Sources
The SDK provides two built-in policy sources:
VerifierPolicySource
Fetches policies from the Verifier API endpoint:
```python
from lucid_sdk.policy_source import VerifierPolicySource

# Fetch from Verifier API
source = VerifierPolicySource(
    base_url="https://verifier.example.com/v1",
    timeout=10.0,
    api_key="optional-api-key",  # or set LUCID_API_KEY env var
)

policy, version = source.fetch("my-auditor-id")
print(f"Loaded policy version: {version}")
```
FilePolicySource
Loads policies from local YAML files (useful for development/testing):
```python
from lucid_sdk.policy_source import FilePolicySource

# Load from local file
source = FilePolicySource("/path/to/policy.yaml")
policy, version = source.fetch("my-auditor-id")
# Version is derived from file modification time
```
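Deriving the version from the file's modification time means editing the file is enough to signal "new policy" to a polling engine. The minimal sketch below shows that mechanism; it is not the SDK's `FilePolicySource` (and uses JSON instead of YAML to stay stdlib-only):

```python
import json
import os
import tempfile
from typing import Tuple

class MiniFileSource:
    """Illustrative stand-in for a file-backed policy source."""
    def __init__(self, path: str):
        self.path = path

    def fetch(self, auditor_id: str) -> Tuple[dict, str]:
        # The mtime-based version changes whenever the file is rewritten
        version = str(os.path.getmtime(self.path))
        with open(self.path) as f:
            policy = json.load(f)
        return policy, version

# Usage: write a policy file, then fetch it
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"policy_id": "toxicity-v1", "config": {"toxicity_threshold": 0.8}}, f)
    path = f.name

policy, version = MiniFileSource(path).fetch("toxicity-auditor")
```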
DynamicPolicyEngine
DynamicPolicyEngine wraps PolicyEngine with automatic policy refresh:
```python
from lucid_sdk.policy_engine import DynamicPolicyEngine
from lucid_sdk.policy_source import VerifierPolicySource

source = VerifierPolicySource("https://verifier.example.com/v1")

engine = DynamicPolicyEngine(
    source=source,
    auditor_id="toxicity-auditor",
    refresh_interval=60,   # Check for updates every 60 seconds
    max_stale_time=300,    # Use stale policy for up to 5 minutes on failure
    fail_closed=True,      # Deny if no policy available (safety default)
)

# Use like a regular PolicyEngine
result = engine.evaluate(claims)

# Check current policy version
print(f"Policy version: {engine.policy_version}")
print(f"Config threshold: {engine.config.toxicity_threshold}")
```
DynamicPolicyEngine Features
| Feature | Description |
|---|---|
| Auto-refresh | Polls source at configurable interval |
| Caching | Keeps last-known-good policy in memory |
| Graceful fallback | Uses stale policy on fetch failure (within max_stale_time) |
| Fail-closed | Denies requests when no policy available (configurable) |
| Version tracking | Exposes policy_version for observability |
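The interaction of these features (polling, last-known-good caching, `max_stale_time`, fail-closed) can be sketched as follows. This is a hypothetical simplification, not the SDK's `DynamicPolicyEngine`, and the toy `evaluate` rule exists only to exercise the refresh logic:

```python
import time
from typing import Callable, Optional, Tuple

class MiniDynamicEngine:
    """Illustrative refresh-with-fallback loop."""
    def __init__(self, fetch: Callable[[], Tuple[dict, str]],
                 refresh_interval: float, max_stale_time: float,
                 fail_closed: bool = True):
        self.fetch = fetch
        self.refresh_interval = refresh_interval
        self.max_stale_time = max_stale_time
        self.fail_closed = fail_closed
        self.policy: Optional[dict] = None
        self.policy_version: Optional[str] = None
        self._last_ok = -float("inf")
        self._last_try = -float("inf")

    def current_policy(self) -> Optional[dict]:
        now = time.monotonic()
        if now - self._last_try >= self.refresh_interval:
            self._last_try = now
            try:
                self.policy, self.policy_version = self.fetch()
                self._last_ok = now
            except Exception:
                # Graceful fallback: keep last-known-good policy...
                if now - self._last_ok > self.max_stale_time:
                    self.policy = None  # ...but not beyond max_stale_time
        return self.policy

    def evaluate(self, claims: dict) -> str:
        policy = self.current_policy()
        if policy is None:
            # Fail-closed: no policy available means deny
            return "DENY" if self.fail_closed else "PROCEED"
        threshold = policy["config"]["toxicity_threshold"]
        return "DENY" if claims.get("toxicity", 0) >= threshold else "PROCEED"

# Healthy source: the first evaluate() fetches, later calls reuse the cache
engine = MiniDynamicEngine(lambda: ({"config": {"toxicity_threshold": 0.8}}, "v1"),
                           refresh_interval=60, max_stale_time=300)
```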
Verifier API Endpoint
The Verifier exposes a policy lookup endpoint:
```
GET /v1/auditors/{auditor_id}/policy?public=true
```
The `public=true` query parameter allows unauthenticated access for auditors fetching their policies dynamically.
Response:
```json
{
  "auditor_id": "toxicity-auditor",
  "version": "2024-01-15T10:30:00Z",
  "policy": {
    "schema_version": "1.0.0",
    "policy_id": "toxicity-v1",
    "name": "Toxicity Policy",
    "config": {
      "toxicity_threshold": 0.8
    },
    "rules": [...]
  }
}
```
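A client consuming this endpoint unpacks the envelope into the policy document and its version. The snippet below parses a response body of this shape with plain `json` (an empty `rules` list stands in for the elided rules):

```python
import json

# Example response body, matching the shape documented above
body = """
{
  "auditor_id": "toxicity-auditor",
  "version": "2024-01-15T10:30:00Z",
  "policy": {
    "schema_version": "1.0.0",
    "policy_id": "toxicity-v1",
    "name": "Toxicity Policy",
    "config": {"toxicity_threshold": 0.8},
    "rules": []
  }
}
"""

response = json.loads(body)
policy, version = response["policy"], response["version"]
threshold = policy["config"]["toxicity_threshold"]  # 0.8
```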
Integration with Observer UI
With formal policies, the Observer dashboard can display policy details:
```mermaid
block-beta
  columns 1
  block:header["DPDP Data Locality Auditor"]:1
    space
  end
  block:policy:1
    A["Policy: DPDP Data Locality Policy v1.0.0"]
  end
  block:rules["RULES"]:1
    B["1. Data must be processed within Indian jurisdiction\n→ DENY if location.country ≠ 'IN'\n2. Location confidence must meet threshold\n→ WARN if confidence < 0.8"]
  end
  block:enforcement:1
    C["ENFORCEMENT: Block on violation"]
  end
  block:compliance["COMPLIANCE MAPPING"]:1
    D["DPDP Act 2023 → Section 17\nRBI Data Localization → 2018 Circular"]
  end
```