EU AI Act Compliance Guide
This guide helps compliance officers configure Lucid to meet the requirements of the European Union Artificial Intelligence Act (EU AI Act) for high-risk AI systems.
Overview
The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It establishes requirements for AI systems based on their risk level, with the most stringent requirements applying to "high-risk" AI systems. The regulation requires robust risk management, data governance, transparency, human oversight, accuracy, and cybersecurity.
Lucid helps organizations meet these requirements through:
- Risk management via pre-deployment safety testing and ongoing monitoring
- Robustness and cybersecurity through injection defense and security controls
- Transparency and traceability via comprehensive logging and AI provenance
- Human oversight enablement through explainable AI capabilities
- Content marking for AI-generated synthetic content
Key EU AI Act Articles and Lucid Auditors
| Article | Requirement | Recommended Auditor |
|---|---|---|
| Art. 9 | Risk management system | Red Team Auditor, Eval Auditor (safety benchmarks) |
| Art. 10 | Data and data governance | PII Compliance Auditor (data classification), Fairness Auditor (bias) |
| Art. 12 | Record-keeping (logging) | Observability Auditor |
| Art. 13 | Transparency and information | Eval Auditor (explainability) |
| Art. 14 | Human oversight | Eval Auditor, Observability Auditor |
| Art. 15 | Accuracy, robustness, cybersecurity | Guardrails Auditor, Model Security Auditor, Eval Auditor |
| Art. 50 | Synthetic content marking | Watermark Auditor |
High-Risk AI Classification
Before configuring Lucid, determine if your AI system is classified as high-risk under the EU AI Act. High-risk systems include AI used in:
- Biometric identification
- Critical infrastructure management
- Education and vocational training
- Employment and worker management
- Access to essential services
- Law enforcement
- Migration, asylum, and border control
- Administration of justice
If your system falls into these categories, you must comply with the full requirements of Articles 9-15.
Deploying for EU AI Act Compliance
Quick Start
Deploy an AI environment with the EU AI Act compliance profile:
lucid apply --app open-webui --model llama-3.1-8b --profile eu-ai-act
This enables the following auditors:

- Eval Auditor - Safety benchmarks and explainability
- Red Team Auditor - Risk management and adversarial testing
- Fairness Auditor - Bias detection
- Guardrails Auditor - Cybersecurity and robustness
- Model Security Auditor - Model integrity verification
- Observability Auditor - Automatic logging and traceability
- Watermark Auditor - Synthetic content marking
- PII Compliance Auditor - Data governance
Custom Configuration
For high-risk AI systems requiring comprehensive EU AI Act compliance:
# eu-ai-act-environment.yaml
apiVersion: lucid.io/v1alpha1
kind: LucidEnvironment
metadata:
name: eu-ai-act-compliant
spec:
infrastructure:
provider: gcp
region: europe-west1 # EU region
agents:
- name: high-risk-agent
model:
id: meta-llama/Llama-3.1-8B
gpu:
type: L4
memory: 24GB
auditChain:
preDeploy:
- auditorId: lucid-red-team-auditor
name: Risk Management (Art. 9)
env:
RED_TEAM_TESTING_ENABLED: "true"
WMDP_BENCHMARK: "true"
HARMBENCH_ENABLED: "true"
- auditorId: lucid-eval-auditor
name: Safety Benchmarks (Art. 9)
env:
SAFETY_BENCHMARKS_ENABLED: "true"
- auditorId: lucid-fairness-auditor
name: Bias Detection (Art. 10)
env:
BIAS_DETECTION_ENABLED: "true"
- auditorId: lucid-model-security-auditor
name: Model Integrity (Art. 15)
env:
MODEL_INTEGRITY_CHECK: "true"
preRequest:
- auditorId: lucid-guardrails-auditor
name: Cybersecurity (Art. 15.3)
env:
INJECTION_BLOCK_ON_DETECTION: "true"
INJECTION_THRESHOLD: "0.7"
JAILBREAK_DETECTION_ENABLED: "true"
- auditorId: lucid-pii-compliance-auditor
name: Data Governance (Art. 10)
env:
PII_DETECTION_ENABLED: "true"
DATA_CLASSIFICATION_ENABLED: "true"
postResponse:
- auditorId: lucid-watermark-auditor
name: Synthetic Content Marking (Art. 50)
env:
WATERMARK_ENABLED: "true"
WATERMARK_DETECTABLE: "true"
PROVENANCE_TRACKING: "true"
- auditorId: lucid-observability-auditor
name: Automatic Logging (Art. 12)
env:
LOG_RETENTION_DAYS: "3650" # 10 years
LOG_ALL_EVENTS: "true"
TRACEABILITY_ENABLED: "true"
LOG_MODEL_INPUTS: "true"
LOG_MODEL_OUTPUTS: "true"
- auditorId: lucid-eval-auditor
name: Accuracy Monitoring (Art. 15)
env:
ACCURACY_MONITORING: "true"
PERFORMANCE_METRICS: "true"
Deploy with:
lucid apply -f eu-ai-act-environment.yaml
Article-by-Article Guidance
Article 9: Risk Management System
Requirement: Establish, implement, document, and maintain a risk management system throughout the AI system's lifecycle, including testing to ensure appropriate and targeted risk management measures.
Lucid Implementation:
- Red Team Auditor - Adversarial testing
  - Pre-deployment safety benchmarks (WMDP, HarmBench)
  - Red team testing to identify vulnerabilities
- Eval Auditor - Safety benchmarks
  - Ongoing model evaluation
  - Performance metrics
- Fairness Auditor - Bias detection
  - Bias detection to identify discrimination risks
env:
SAFETY_BENCHMARKS_ENABLED: "true"
RED_TEAM_TESTING_ENABLED: "true"
WMDP_BENCHMARK: "true"
HARMBENCH_ENABLED: "true"
BIAS_DETECTION_ENABLED: "true"
RISK_ASSESSMENT_INTERVAL: "weekly"
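The pre-deployment gating implied by these settings can be sketched as a simple threshold check: the deployment proceeds only if every enabled safety benchmark meets its minimum score. This is an illustrative sketch, not Lucid's actual logic; the benchmark metric names and threshold values below are assumptions.

```python
# Illustrative pre-deployment risk gate (not Lucid's implementation).
# Metric names and thresholds are assumed for the example.
def risk_gate(scores: dict, thresholds: dict):
    """Return (passed, failures) comparing benchmark scores to minimum thresholds."""
    failures = [name for name, minimum in thresholds.items()
                if scores.get(name, 0.0) < minimum]
    return (not failures, failures)

thresholds = {"wmdp_refusal_rate": 0.95, "harmbench_block_rate": 0.90}
scores = {"wmdp_refusal_rate": 0.97, "harmbench_block_rate": 0.85}
passed, failures = risk_gate(scores, thresholds)
# harmbench_block_rate falls below its 0.90 minimum, so the gate fails
```

A failed gate would surface in the Red Team Auditor's pre-deploy report rather than allow the agent to go live.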
Documentation for Conformity Assessment: The Red Team Auditor and Eval Auditor generate comprehensive reports of safety testing results that can be included in your technical documentation for conformity assessments.
Article 10: Data and Data Governance
Requirement: Training, validation, and testing datasets shall be subject to appropriate data governance practices, including examination for biases.
Lucid Implementation:
- PII Compliance Auditor - Data classification and governance
  - Identifies data types in AI workflows
  - Classifies sensitive information
  - Supports data governance documentation
- Fairness Auditor - Bias examination
  - Detects bias in model outputs
  - Evaluates fairness across demographic groups
env:
DATA_CLASSIFICATION_ENABLED: "true"
BIAS_DETECTION_ENABLED: "true"
FAIRNESS_METRICS: "demographic_parity,equalized_odds,calibration"
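Of the metrics listed in FAIRNESS_METRICS, demographic parity is the simplest: it measures the gap in positive-outcome rates across groups. The sketch below shows the standard calculation; it is illustrative only and does not reflect the Fairness Auditor's internals.

```python
# Demographic parity difference: the gap between the highest and lowest
# positive-outcome rates across groups. Standard definition; illustrative only.
def demographic_parity_difference(outcomes, groups):
    """outcomes: 0/1 decisions; groups: group label per decision."""
    rates = {}
    for g in set(groups):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    return max(rates.values()) - min(rates.values())

outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
# group a rate = 3/4, group b rate = 1/4, so the difference is 0.5
```

A value near 0 indicates similar selection rates across groups; larger values flag potential discrimination risks for Art. 10 examination.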
Article 12: Record-Keeping (Automatic Logging)
Requirement: High-risk AI systems shall technically allow for automatic recording of events (logs) over the lifetime of the system to ensure traceability.
Lucid Implementation:
- Observability Auditor - Automatic event logging
- Records all AI system events automatically
- Captures inputs, outputs, and intermediate steps
- Logs are cryptographically signed in TEE for integrity
- Supports configurable long-term retention (Art. 19 requires logs be kept for at least six months; this profile defaults to 10 years)
env:
LOG_RETENTION_DAYS: "3650" # 10 years; Art. 19 requires a minimum of six months
LOG_ALL_EVENTS: "true"
LOG_MODEL_INPUTS: "true"
LOG_MODEL_OUTPUTS: "true"
TRACEABILITY_ENABLED: "true"
LOG_TIMESTAMPS: "true"
LOG_VERSION_INFO: "true"
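The traceability property described above (logs that are signed so tampering is detectable) can be illustrated with a minimal HMAC scheme. This sketch is an assumption for illustration: Lucid signs logs inside a TEE with sealed keys, which this plain-Python example does not model, and the record fields are invented.

```python
# Sketch of tamper-evident log records: each record carries an HMAC over its
# canonical JSON form. Illustrative only; key handling and record fields are
# assumptions, and real signing happens inside the TEE.
import hashlib
import hmac
import json

KEY = b"demo-signing-key"  # in practice, a key sealed inside the TEE

def sign_record(record: dict) -> dict:
    payload = json.dumps(record, sort_keys=True).encode()
    return {**record, "sig": hmac.new(KEY, payload, hashlib.sha256).hexdigest()}

def verify_record(signed: dict) -> bool:
    record = {k: v for k, v in signed.items() if k != "sig"}
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["sig"])

entry = sign_record({"ts": "2024-06-01T12:00:00Z",
                     "event": "model_input",
                     "agent": "high-risk-agent"})
```

Any modification to a stored record invalidates its signature, which is what makes the Art. 12 logs usable as traceability evidence.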
Accessing Logs for Authorities:
# Export logs for market surveillance authorities
lucid passport export \
--from 2024-01-01 \
--to 2024-12-31 \
--format json \
--detailed > art12_logs.json
# Generate Article 12 compliance report
lucid passport export --compliance-report eu-ai-act-art12 --format pdf
Article 13: Transparency and Provision of Information
Requirement: High-risk AI systems shall be designed to operate with sufficient transparency to enable users to interpret outputs appropriately.
Lucid Implementation:
- Eval Auditor - Explainability support
  - Documents model capabilities and limitations
  - Provides transparency into model behavior
  - Supports user understanding of AI outputs
- AI Passport - Transparent processing record
  - Documents which controls were applied
  - Shows the processing pipeline clearly
env:
EXPLAINABILITY_ENABLED: "true"
DOCUMENT_CAPABILITIES: "true"
DOCUMENT_LIMITATIONS: "true"
USER_TRANSPARENCY_MODE: "true"
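Art. 13 asks that users receive enough information to interpret outputs appropriately. One way USER_TRANSPARENCY_MODE could surface the documented capabilities and limitations is a generated notice like the sketch below; the function and field names are assumptions for illustration, not Lucid's API.

```python
# Illustrative sketch: rendering capability/limitation metadata as a
# user-facing transparency notice. Field names are assumptions.
def transparency_notice(capabilities, limitations):
    lines = ["This output was produced by an AI system.", "Capabilities:"]
    lines += [f"- {c}" for c in capabilities]
    lines.append("Known limitations:")
    lines += [f"- {l}" for l in limitations]
    return "\n".join(lines)

notice = transparency_notice(
    capabilities=["text summarization"],
    limitations=["may produce inaccurate statements"],
)
```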
Article 14: Human Oversight
Requirement: High-risk AI systems shall be designed to allow effective human oversight, including the ability to correctly interpret outputs, understand capabilities and limitations, and intervene.
Lucid Implementation:
- Observability Auditor - Oversight dashboard
  - Provides real-time visibility into AI operations
  - Enables monitoring of all AI decisions
  - Supports human intervention capabilities
- Eval Auditor - Interpretability support
  - Helps humans understand AI outputs
  - Documents model behavior patterns
env:
HUMAN_OVERSIGHT_MODE: "true"
INTERVENTION_ENABLED: "true"
ALERT_ON_HIGH_RISK_DECISIONS: "true"
DASHBOARD_ENABLED: "true"
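The intent behind ALERT_ON_HIGH_RISK_DECISIONS and INTERVENTION_ENABLED can be sketched as routing logic: decisions whose risk score crosses a threshold are held for human review instead of being auto-released. The threshold value and queue shape below are assumptions, not Lucid's implementation.

```python
# Illustrative routing for human oversight: high-risk decisions are queued
# for review rather than released automatically. Threshold is an assumption.
from collections import deque

REVIEW_THRESHOLD = 0.8
review_queue = deque()

def route_decision(decision_id: str, risk_score: float) -> str:
    if risk_score >= REVIEW_THRESHOLD:
        review_queue.append(decision_id)
        return "pending_human_review"
    return "released"
```

The Observer dashboard would then present the queued decisions to a human reviewer, satisfying the Art. 14 intervention requirement.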
Observer Dashboard: Access the Lucid Observer dashboard for real-time human oversight at https://observer.lucid.sh.
Article 15: Accuracy, Robustness, and Cybersecurity
Requirement: High-risk AI systems shall achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle, and be resilient against attempts to exploit vulnerabilities.
Lucid Implementation:
- Guardrails Auditor - Cybersecurity resilience (Art. 15.3)
  - Defends against prompt injection attacks
  - Blocks jailbreak attempts
  - Protects against adversarial manipulation
- Model Security Auditor - Model integrity (Art. 15.2)
  - Verifies model integrity
- Eval Auditor - Accuracy and robustness (Art. 15.1-2)
  - Monitors model accuracy metrics
  - Runs adversarial robustness tests
- All Auditors in TEE - Hardware security
  - All processing in hardware-secured enclaves
  - Cryptographic attestation of security
env:
# Cybersecurity (Art. 15.3)
INJECTION_BLOCK_ON_DETECTION: "true"
INJECTION_THRESHOLD: "0.7"
JAILBREAK_DETECTION_ENABLED: "true"
# Accuracy (Art. 15.1)
ACCURACY_MONITORING: "true"
PERFORMANCE_METRICS: "true"
# Robustness (Art. 15.2)
ADVERSARIAL_TESTING_ENABLED: "true"
MODEL_INTEGRITY_CHECK: "true"
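The behavior implied by the INJECTION_BLOCK_ON_DETECTION and INJECTION_THRESHOLD pair is a simple decision rule: block when the injection classifier's score reaches the configured threshold. The sketch below illustrates that rule under the assumption of a classifier score in [0, 1]; it is not Lucid's implementation.

```python
# Illustrative preRequest guardrail decision, assuming an injection
# classifier emits a score in [0, 1]. Not Lucid's actual code.
INJECTION_THRESHOLD = 0.7
BLOCK_ON_DETECTION = True

def guardrail_decision(injection_score: float) -> str:
    if BLOCK_ON_DETECTION and injection_score >= INJECTION_THRESHOLD:
        return "block"
    return "allow"
```

Lowering the threshold blocks more aggressively (fewer missed attacks, more false positives); raising it does the opposite.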
Article 50: Synthetic Content Marking
Requirement: Providers of AI systems generating synthetic content (audio, image, video, text) shall ensure outputs are marked in a machine-readable format and detectable as artificially generated.
Lucid Implementation:
- Watermark Auditor - AI content provenance
- Embeds machine-readable watermarks in AI outputs
- Enables detection of AI-generated content
- Provides provenance tracking with TEE attestation
env:
WATERMARK_ENABLED: "true"
WATERMARK_MACHINE_READABLE: "true"
WATERMARK_DETECTABLE: "true"
PROVENANCE_TRACKING: "true"
C2PA_COMPATIBLE: "true" # Coalition for Content Provenance and Authenticity
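To make the Art. 50 requirement concrete, "machine-readable and detectable" means a marker that software can reliably embed and find. The toy below uses a zero-width character for text; this is emphatically not the watermarking scheme Lucid uses (which is not documented here), only an illustration of the embed/detect contract.

```python
# Toy text marker using a zero-width space. NOT Lucid's watermarking scheme;
# only illustrates the "machine-readable and detectable" contract of Art. 50.
MARK = "\u200b"  # zero-width space, invisible to readers

def mark(text: str) -> str:
    return text + MARK

def is_marked(text: str) -> bool:
    return text.endswith(MARK)
```

Production schemes (statistical token watermarks, C2PA manifests) are far more robust to editing than this toy, but follow the same embed/detect pattern.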
Verifying Watermarks:
# Check if content is watermarked
lucid watermark verify --content "AI generated text here"
# Export provenance certificate
lucid passport show <passport-id> --provenance
Evidence for Conformity Assessment
Required Technical Documentation
The EU AI Act requires extensive technical documentation. Lucid provides:
- Risk Management Documentation (Art. 9)
  - Safety benchmark results
  - Red team testing reports
  - Bias evaluation results
- Data Governance Records (Art. 10)
  - Data classification logs
  - Bias examination records
- Automatic Logging (Art. 12)
  - Complete event logs
  - Traceability records
  - 10-year retention capability
- Transparency Documentation (Art. 13)
  - Model capability documentation
  - Limitation disclosures
  - Processing transparency records
- Cybersecurity Evidence (Art. 15)
  - Security control attestations
  - Blocked attack records
  - Hardware attestation certificates
Generating Conformity Assessment Evidence
# Generate comprehensive EU AI Act documentation package
lucid passport export --compliance-report eu-ai-act --format pdf > eu_ai_act_evidence.pdf
# Export Article 12 automatic logs
lucid passport export --art12-logs --from 2024-01-01 > art12_logs.json
# Generate risk management report (Art. 9)
lucid eval report --risk-management > risk_management.pdf
# Export watermark provenance records (Art. 50)
lucid passport export --provenance --from 2024-01-01 > provenance_records.json
For Notified Bodies
When undergoing conformity assessment by a notified body, provide:
- AI Passports - Cryptographic proof of control enforcement
- Observability logs - Article 12 compliant event records
- Eval reports - Safety benchmark and risk assessment results
- Configuration documentation - Technical implementation details
- TEE attestations - Hardware-backed security evidence
Post-Market Monitoring
The EU AI Act requires ongoing monitoring after deployment. Lucid supports this through:
- Continuous monitoring via Observability Auditor
- Ongoing safety evaluation via Eval Auditor
- Incident detection and reporting capabilities
# Set up continuous monitoring
lucid monitor --agent high-risk-agent --alerts
# Generate post-market monitoring report
lucid passport export --post-market-report --period monthly
AI Office Reporting
For serious incidents or market surveillance authority requests, export comprehensive evidence:
# Generate incident report
lucid incident report --incident-id INC-001 --format pdf
# Export for market surveillance authority
lucid passport export \
--authority-request \
--request-id AUTH-2024-001 \
--format json
General-Purpose AI (GPAI) Considerations
If you are deploying foundation models or general-purpose AI with systemic risk, additional requirements apply:
env:
# GPAI with systemic risk (Art. 55)
GPAI_SYSTEMIC_RISK_MODE: "true"
MODEL_EVALUATION_COMPREHENSIVE: "true"
RED_TEAM_ADVERSARIAL: "true"
INCIDENT_REPORTING_ENABLED: "true"
Best Practices for EU AI Act Compliance
- Classify your AI system - Determine if it's high-risk before configuring
- Enable comprehensive logging - Article 12 requires automatic event recording
- Deploy in EU regions - Ensure data residency compliance
- Configure watermarking - Required for AI-generated content
- Retain records appropriately - keep logs for at least six months (Art. 19) and technical documentation for 10 years (Art. 18)
- Conduct regular risk assessments - Use Eval Auditor safety benchmarks
- Prepare conformity documentation - Maintain technical documentation package
- Enable human oversight - Ensure intervention capabilities exist
Timeline Considerations
The EU AI Act has phased implementation:

- February 2025: Prohibited AI practices take effect
- August 2025: GPAI requirements take effect
- August 2026: High-risk AI requirements take effect
Configure Lucid now to ensure compliance by the relevant deadlines.
Related Resources
- Auditor Catalog - Detailed EU AI Act control mappings
- Policy as Code - Custom compliance rules
- GDPR Compliance Guide - Complementary EU data protection requirements
- SOC 2 Compliance Guide - Complementary service organization controls