EU AI Act Compliance Guide

This guide helps compliance officers configure Lucid to meet the requirements of the European Union Artificial Intelligence Act (EU AI Act) for high-risk AI systems.

Overview

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It establishes requirements for AI systems based on their risk level, with the most stringent requirements applying to "high-risk" AI systems. The regulation requires robust risk management, data governance, transparency, human oversight, accuracy, and cybersecurity.

Lucid helps organizations meet these requirements through:

  • Risk management via pre-deployment safety testing and ongoing monitoring
  • Robustness and cybersecurity through injection defense and security controls
  • Transparency and traceability via comprehensive logging and AI provenance
  • Human oversight enablement through explainable AI capabilities
  • Content marking for AI-generated synthetic content

Key EU AI Act Articles and Lucid Auditors

Article   Requirement                           Recommended Auditor
Art. 9    Risk management system                LLM Judge (safety benchmarks)
Art. 10   Data and data governance              LLM Judge (data classification), LLM Judge (bias)
Art. 12   Record-keeping (logging)              AI Passport
Art. 13   Transparency and information          LLM Judge (explainability)
Art. 14   Human oversight                       LLM Judge, AI Passport
Art. 15   Accuracy, robustness, cybersecurity   LLM Judge Auditor, LLM Judge
Art. 50   Synthetic content marking             LLM Judge

High-Risk AI Classification

Before configuring Lucid, determine if your AI system is classified as high-risk under the EU AI Act. High-risk systems include AI used in:

  • Biometric identification
  • Critical infrastructure management
  • Education and vocational training
  • Employment and worker management
  • Access to essential services
  • Law enforcement
  • Migration, asylum, and border control
  • Administration of justice

If your system falls into any of these categories, you must comply with the full requirements of Articles 9-15.

Deploying for EU AI Act Compliance

Quick Start

Deploy an AI environment with the EU AI Act compliance profile:

lucid apply --model llama-3.1-8b --profile eu-ai-act

This enables the following auditors:

  • LLM Judge - Safety benchmarks and explainability
  • LLM Judge - Risk management and adversarial testing
  • LLM Judge - Bias detection
  • LLM Judge Auditor - Cybersecurity and robustness
  • LLM Judge Auditor - Model integrity verification
  • AI Passport - Automatic logging and traceability
  • LLM Judge - Synthetic content marking
  • LLM Judge - Data governance

Custom Configuration

For high-risk AI systems requiring comprehensive EU AI Act compliance:

# eu-ai-act-environment.yaml
apiVersion: lucid.io/v1alpha1
kind: LucidEnvironment
metadata:
  name: eu-ai-act-compliant
spec:
  infrastructure:
    provider: gcp
    region: europe-west1  # EU region
  agents:
    - name: high-risk-agent
      model:
        id: meta-llama/Llama-3.1-8B
      gpu:
        type: L4
        memory: 24GB
      auditorChain:
        preRequest:
          - auditorId: lucid-llm-judge-auditor
            name: Cybersecurity (Art. 15.3)
            env:
              INJECTION_BLOCK_ON_DETECTION: "true"
              INJECTION_THRESHOLD: "0.7"
          - auditorId: lucid-llm-judge-auditor
            name: EU AI Act Guardrails (Art. 5, 9, 10)
        postResponse:
          - auditorId: lucid-llm-judge-auditor
            name: Output Safety & Transparency (Art. 13, 50)

Deploy with:

lucid apply -f eu-ai-act-environment.yaml

Article-by-Article Guidance

Article 9: Risk Management System

Requirement: Establish, implement, document, and maintain a risk management system throughout the AI system's lifecycle, including testing to ensure appropriate and targeted risk management measures.

Lucid Implementation:

  1. LLM Judge - Adversarial testing
     • Pre-deployment safety benchmarks (WMDP, HarmBench)
     • Red team testing to identify vulnerabilities
  2. LLM Judge - Safety benchmarks
     • Ongoing model evaluation
     • Performance metrics
  3. LLM Judge - Bias detection
     • Bias detection to identify discrimination risks

env:
  SAFETY_BENCHMARKS_ENABLED: "true"
  RED_TEAM_TESTING_ENABLED: "true"
  WMDP_BENCHMARK: "true"
  HARMBENCH_ENABLED: "true"
  BIAS_DETECTION_ENABLED: "true"
  RISK_ASSESSMENT_INTERVAL: "weekly"

Documentation for Conformity Assessment: The LLM Judge auditors generate comprehensive reports of safety testing results that can be included in your technical documentation for conformity assessments.

Article 10: Data and Data Governance

Requirement: Training, validation, and testing datasets shall be subject to appropriate data governance practices, including examination for biases.

Lucid Implementation:

  1. LLM Judge - Data classification and governance
     • Identifies data types in AI workflows
     • Classifies sensitive information
     • Supports data governance documentation
  2. LLM Judge - Bias examination
     • Detects bias in model outputs
     • Evaluates fairness across demographic groups

env:
  DATA_CLASSIFICATION_ENABLED: "true"
  BIAS_DETECTION_ENABLED: "true"
  FAIRNESS_METRICS: "demographic_parity,equalized_odds,calibration"
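
The FAIRNESS_METRICS values above have precise statistical definitions. As an illustration only (this is not Lucid's implementation, and the function name and toy data are hypothetical), the sketch below computes a demographic-parity gap: the spread in positive-outcome rates across groups, where a value near zero suggests parity.

```python
from collections import defaultdict

def demographic_parity_gap(groups, predictions):
    """Spread in positive-prediction rate across demographic groups.

    groups: group label per example; predictions: binary model outputs.
    A gap near 0 means all groups receive positive outcomes at
    similar rates (the demographic-parity criterion).
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for g, p in zip(groups, predictions):
        totals[g] += 1
        positives[g] += p
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy data: group "a" receives positive outcomes 2/3 of the time, group "b" 1/3.
gap = demographic_parity_gap(["a", "a", "a", "b", "b", "b"],
                             [1, 1, 0, 1, 0, 0])
```

Equalized odds and calibration follow the same pattern but condition the rates on the true label and the predicted score, respectively.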

Article 12: Record-Keeping (Automatic Logging)

Requirement: High-risk AI systems shall technically allow for automatic recording of events (logs) over the lifetime of the system to ensure traceability.

Lucid Implementation:

  1. AI Passport - Automatic event logging
     • Records all AI system events automatically
     • Captures inputs, outputs, and intermediate steps
     • Logs are cryptographically signed in TEE for integrity
     • Supports the 10-year retention requirement

env:
  LOG_RETENTION_DAYS: "3650"  # 10 years per EU AI Act
  LOG_ALL_EVENTS: "true"
  LOG_MODEL_INPUTS: "true"
  LOG_MODEL_OUTPUTS: "true"
  TRACEABILITY_ENABLED: "true"
  LOG_TIMESTAMPS: "true"
  LOG_VERSION_INFO: "true"
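
Tamper-evident logging of the kind described above is commonly built as a hash chain: each record commits to the hash of its predecessor, so altering any past entry invalidates every later one. A minimal sketch of the idea (illustrative only; the function names are hypothetical and Lucid's TEE-backed signing is not reproduced here):

```python
import hashlib
import json

GENESIS = "0" * 64

def _digest(event, prev):
    payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(chain, event):
    """Append an event; the new entry's hash commits to its predecessor."""
    prev = chain[-1]["hash"] if chain else GENESIS
    chain.append({"event": event, "prev": prev, "hash": _digest(event, prev)})

def verify_chain(chain):
    """Recompute every hash; any edited or reordered entry fails."""
    prev = GENESIS
    for entry in chain:
        if entry["prev"] != prev or entry["hash"] != _digest(entry["event"], prev):
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"type": "model_input", "prompt": "What is the capital of France?"})
append_entry(log, {"type": "model_output", "completion": "Paris"})
```

In a production system the chain head would additionally be signed inside the TEE, turning tamper evidence into tamper proof for auditors.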

Accessing Logs for Authorities:

# Export logs for market surveillance authorities
lucid passport export \
  --from 2025-01-01 \
  --to 2025-12-31 \
  --format json \
  --detailed > art12_logs.json

# Generate Article 12 compliance report
lucid passport export --compliance-report eu-ai-act-art12 --format pdf

Article 13: Transparency and Provision of Information

Requirement: High-risk AI systems shall be designed to operate with sufficient transparency to enable users to interpret outputs appropriately.

Lucid Implementation:

  1. LLM Judge - Explainability support
     • Documents model capabilities and limitations
     • Provides transparency into model behavior
     • Supports user understanding of AI outputs
  2. AI Passport - Transparent processing record
     • Documents which controls were applied
     • Shows the processing pipeline clearly

env:
  EXPLAINABILITY_ENABLED: "true"
  DOCUMENT_CAPABILITIES: "true"
  DOCUMENT_LIMITATIONS: "true"
  USER_TRANSPARENCY_MODE: "true"

Article 14: Human Oversight

Requirement: High-risk AI systems shall be designed to allow effective human oversight, including the ability to correctly interpret outputs, understand capabilities and limitations, and intervene.

Lucid Implementation:

  1. AI Passport - Oversight dashboard
     • Provides real-time visibility into AI operations
     • Enables monitoring of all AI decisions
     • Supports human intervention capabilities
  2. LLM Judge - Interpretability support
     • Helps humans understand AI outputs
     • Documents model behavior patterns

env:
  HUMAN_OVERSIGHT_MODE: "true"
  INTERVENTION_ENABLED: "true"
  ALERT_ON_HIGH_RISK_DECISIONS: "true"
  DASHBOARD_ENABLED: "true"
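
The ALERT_ON_HIGH_RISK_DECISIONS and INTERVENTION_ENABLED settings imply a routing step like the following: outputs scored above an alert threshold are held for a human reviewer rather than released automatically. A hypothetical sketch of that flow (not Lucid's actual code):

```python
def route_decision(decision, risk_score, review_queue, alert_threshold=0.8):
    """Hold high-risk decisions for human review (Art. 14 intervention).

    decision: the AI output plus context; risk_score: a [0, 1] risk
    estimate from an auditor (how it is computed is out of scope here).
    """
    if risk_score >= alert_threshold:
        review_queue.append(decision)  # surfaced on the oversight dashboard
        return "pending_human_review"
    return "released"

review_queue = []
status = route_decision({"id": "loan-123", "outcome": "deny"}, 0.95, review_queue)
```

The key Article 14 property is that the human can intervene before the output takes effect, not merely audit it afterwards.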

Observer Dashboard: Access the Lucid Observer dashboard for real-time human oversight at https://observer.lucid.sh.

Article 15: Accuracy, Robustness, and Cybersecurity

Requirement: High-risk AI systems shall achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle, and be resilient against attempts to exploit vulnerabilities.

Lucid Implementation:

  1. LLM Judge Auditor - Cybersecurity resilience (Art. 15.3)
     • Defends against prompt injection attacks
     • Blocks jailbreak attempts
     • Protects against adversarial manipulation
  2. LLM Judge Auditor - Model integrity (Art. 15.2)
     • Verifies model integrity
  3. LLM Judge - Accuracy and robustness (Art. 15.1-2)
     • Monitors model accuracy metrics
     • Runs adversarial robustness tests
  4. All Auditors in TEE - Hardware security
     • All processing in hardware-secured enclaves
     • Cryptographic attestation of security

env:
  # Cybersecurity (Art. 15.3)
  INJECTION_BLOCK_ON_DETECTION: "true"
  INJECTION_THRESHOLD: "0.7"
  JAILBREAK_DETECTION_ENABLED: "true"

  # Accuracy (Art. 15.1)
  ACCURACY_MONITORING: "true"
  PERFORMANCE_METRICS: "true"

  # Robustness (Art. 15.2)
  ADVERSARIAL_TESTING_ENABLED: "true"
  MODEL_INTEGRITY_CHECK: "true"
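
Conceptually, INJECTION_THRESHOLD and INJECTION_BLOCK_ON_DETECTION combine into a simple pre-request gate: the request is blocked once the detector's confidence meets the threshold. A hypothetical sketch of that decision logic (illustrative only, not Lucid's implementation):

```python
def pre_request_gate(injection_score, threshold=0.7, block_on_detection=True):
    """Decide whether a request reaches the model.

    injection_score: detector confidence in [0, 1] that the prompt
    contains an injection attempt (the detector itself is out of scope).
    """
    if block_on_detection and injection_score >= threshold:
        return {"action": "block", "reason": "suspected_prompt_injection"}
    return {"action": "allow"}
```

Raising the threshold trades fewer false blocks for more missed attacks; 0.7 matches the profile shown earlier in this guide.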

Article 50: Synthetic Content Marking

Requirement: Providers of AI systems generating synthetic content (audio, image, video, text) shall ensure outputs are marked in a machine-readable format and detectable as artificially generated.

Lucid Implementation:

  1. LLM Judge - AI content provenance
     • Embeds machine-readable watermarks in AI outputs
     • Enables detection of AI-generated content
     • Provides provenance tracking with TEE attestation

env:
  WATERMARK_ENABLED: "true"
  WATERMARK_MACHINE_READABLE: "true"
  WATERMARK_DETECTABLE: "true"
  PROVENANCE_TRACKING: "true"
  C2PA_COMPATIBLE: "true"  # Coalition for Content Provenance and Authenticity
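
One common way to make a marker both machine-readable and verifiable is to pair the output with a provenance manifest that declares it AI-generated and binds the claim to the exact content via a digest. The shape below is a hypothetical sketch of that pattern, not Lucid's watermark format or the C2PA manifest schema:

```python
import hashlib

def provenance_manifest(content, model_id):
    """Machine-readable declaration that `content` is AI-generated."""
    return {
        "claim": "ai-generated",
        "model_id": model_id,
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
    }

def is_marked(content, manifest):
    """Detection side: the claim must match this exact content."""
    return (manifest.get("claim") == "ai-generated"
            and manifest.get("content_sha256")
            == hashlib.sha256(content.encode()).hexdigest())

manifest = provenance_manifest("Generated answer text.", "meta-llama/Llama-3.1-8B")
```

A detached manifest is lost if the content travels alone, which is why statistical watermarks embedded in the text itself complement it.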

Verifying Watermarks:

# Check if content is watermarked
lucid watermark verify --content "AI generated text here"

# Export provenance certificate
lucid passport show <passport-id> --provenance

Evidence for Conformity Assessment

Required Technical Documentation

The EU AI Act requires extensive technical documentation. Lucid provides:

  1. Risk Management Documentation (Art. 9)
     • Safety benchmark results
     • Red team testing reports
     • Bias evaluation results
  2. Data Governance Records (Art. 10)
     • Data classification logs
     • Bias examination records
  3. Automatic Logging (Art. 12)
     • Complete event logs
     • Traceability records
     • 10-year retention capability
  4. Transparency Documentation (Art. 13)
     • Model capability documentation
     • Limitation disclosures
     • Processing transparency records
  5. Cybersecurity Evidence (Art. 15)
     • Security control attestations
     • Blocked attack records
     • Hardware attestation certificates

Generating Conformity Assessment Evidence

# Generate comprehensive EU AI Act documentation package
lucid passport export --compliance-report eu-ai-act --format pdf > eu_ai_act_evidence.pdf

# Export Article 12 automatic logs
lucid passport export --art12-logs --from 2025-01-01 > art12_logs.json

# Generate risk management report (Art. 9)
lucid eval report --risk-management > risk_management.pdf

# Export watermark provenance records (Art. 50)
lucid passport export --provenance --from 2025-01-01 > provenance_records.json

For Notified Bodies

When undergoing conformity assessment by a notified body, provide:

  1. AI Passports - Cryptographic proof of control enforcement
  2. Observability logs - Article 12 compliant event records
  3. Eval reports - Safety benchmark and risk assessment results
  4. Configuration documentation - Technical implementation details
  5. TEE attestations - Hardware-backed security evidence

Post-Market Monitoring

The EU AI Act requires ongoing monitoring after deployment. Lucid supports this through:

  1. Continuous monitoring via AI Passport
  2. Ongoing safety evaluation via LLM Judge
  3. Incident detection and reporting capabilities

# Set up continuous monitoring
lucid monitor --agent high-risk-agent --alerts

# Generate post-market monitoring report
lucid passport export --post-market-report --period monthly

AI Office Reporting

For serious incidents or market surveillance authority requests, export comprehensive evidence:

# Generate incident report
lucid incident report --incident-id INC-001 --format pdf

# Export for market surveillance authority
lucid passport export \
  --authority-request \
  --request-id AUTH-2024-001 \
  --format json

General-Purpose AI (GPAI) Considerations

If you are deploying foundation models or general-purpose AI with systemic risk, additional requirements apply:

env:
  # GPAI with systemic risk (Art. 55)
  GPAI_SYSTEMIC_RISK_MODE: "true"
  MODEL_EVALUATION_COMPREHENSIVE: "true"
  RED_TEAM_ADVERSARIAL: "true"
  INCIDENT_REPORTING_ENABLED: "true"

Best Practices for EU AI Act Compliance

  1. Classify your AI system - Determine if it's high-risk before configuring
  2. Enable comprehensive logging - Article 12 requires automatic event recording
  3. Deploy in EU regions - Ensure data residency compliance
  4. Configure watermarking - Required for AI-generated content
  5. Retain logs for 10 years - EU AI Act retention requirement
  6. Conduct regular risk assessments - Use LLM Judge safety benchmarks
  7. Prepare conformity documentation - Maintain technical documentation package
  8. Enable human oversight - Ensure intervention capabilities exist

Timeline Considerations

The EU AI Act has phased implementation:

  • February 2025: Prohibited AI practices take effect
  • August 2025: GPAI requirements take effect
  • August 2026: High-risk AI requirements take effect

Configure Lucid now to ensure compliance by the relevant deadlines.