ClaimsAuditor Interface Specification
Version: 2.1.0 | Status: Final | Last Updated: February 2026
Overview
This document defines the standard interface that all Lucid ClaimsAuditors must implement. ClaimsAuditors are observation-only components that produce claims (structured observations about AI traffic). They do not make enforcement decisions.
Enforcement is handled exclusively by the Gateway (the Policy Enforcement Point), which collects claims from all auditors, evaluates a single Cedar policy, and produces one Evidence bundle per request.
This architecture enables:
- Clean separation between observation and enforcement
- Dynamic policy updates without redeploying auditors
- Unified Cedar policy language across the entire stack
- RFC 9334 (RATS) compliant evidence format
ClaimsAuditor Lifecycle
```mermaid
flowchart TD
    subgraph Lifecycle["CLAIMSAUDITOR LIFECYCLE"]
        A["1. INITIALIZATION"] --> A1["Load config, warmup models, register vocabulary"]
        A1 --> B["2. OBSERVATION"]
        B --> B1["Receive data via /claims, analyze, produce claims"]
        B1 --> C["3. CLAIMS RETURN"]
        C --> C1["Return structured claims to Gateway"]
        C1 --> D["4. (Gateway handles enforcement)"]
    end
```
HTTP API Specification
Required Endpoints
POST /claims
Main endpoint for receiving data and returning claims (observations). The auditor analyzes the input and returns structured claims. It does not make allow/deny decisions.
Request:
```json
{
  "data": {
    "input": "User prompt or request",
    "output": "Model response (optional)",
    "metadata": {
      "model_id": "gpt-4",
      "session_id": "sess-123",
      "user_id": "user-456"
    }
  },
  "phase": "request",
  "lucid_context": {
    "trace_id": "trace-789",
    "agent_id": "agent-abc",
    "workspace_id": "ws-123",
    "auditor_config": {}
  }
}
```
Response:
```json
{
  "status": "success",
  "claims": [
    {
      "name": "toxic_content",
      "type": "score_normalized",
      "value": 0.12,
      "metadata": {
        "categories": {
          "threat": 0.05,
          "insult": 0.08,
          "obscene": 0.03
        }
      },
      "timestamp": "2026-02-17T12:00:00Z",
      "confidence": 0.95
    }
  ]
}
```
Note: There is no "blocked" response. ClaimsAuditors only return observations. The Gateway decides whether to block based on Cedar policy evaluation.
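To make the observation-only contract concrete, here is a minimal sketch of a `/claims` handler in Python. The `score_toxicity` function is a hypothetical stand-in for a real model call; the point is that the handler returns claims and never a decision:

```python
from datetime import datetime, timezone


def score_toxicity(text: str) -> float:
    """Stand-in scorer; a real auditor would invoke its model here."""
    return 0.9 if "hate" in text.lower() else 0.1


def handle_claims(request: dict) -> dict:
    """Analyze the input and return claims only -- no allow/deny decision."""
    text = request["data"].get("input", "")
    score = score_toxicity(text)
    return {
        "status": "success",
        "claims": [
            {
                "name": "toxic_content",
                "type": "score_normalized",
                "value": score,
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "confidence": 0.95,
            }
        ],
    }
```

Note that the response carries no `decision` field; mapping scores to allow/deny happens only in the Gateway's Cedar evaluation.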
GET /health
Health check endpoint.
Response:
```json
{
  "status": "healthy",
  "auditor_id": "toxicity-auditor-v1",
  "version": "1.2.3",
  "ready": true
}
```
GET /vocabulary
Declares the claim names and types this auditor can produce. Used by the Gateway for schema validation and by the policy editor for autocomplete.
Response:
```json
{
  "auditor_id": "toxicity-auditor-v1",
  "version": "1.2.3",
  "vocabulary": [
    {
      "name": "toxic_content",
      "type": "score_normalized",
      "description": "Overall toxicity score (0.0 = safe, 1.0 = toxic)",
      "value_schema": {
        "type": "number",
        "minimum": 0,
        "maximum": 1
      }
    }
  ],
  "phases": ["request", "response"],
  "configuration": {
    "model_name": {
      "type": "string",
      "default": "unitary/toxic-bert",
      "description": "Toxicity model to use"
    }
  }
}
```
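As a sketch of how the Gateway might use the advertised vocabulary for schema validation, the function below checks a claim's name, type, and numeric bounds. It handles only the `minimum`/`maximum` keywords shown in the example above, not full JSON Schema, and is an illustration rather than the Gateway's actual implementation:

```python
def validate_claim(claim: dict, vocabulary: list[dict]) -> bool:
    """Check a claim against the auditor's advertised vocabulary entry."""
    entry = next((v for v in vocabulary if v["name"] == claim["name"]), None)
    if entry is None or entry["type"] != claim["type"]:
        return False
    schema = entry.get("value_schema", {})
    value = claim["value"]
    if schema.get("type") == "number" and not isinstance(value, (int, float)):
        return False
    if "minimum" in schema and value < schema["minimum"]:
        return False
    if "maximum" in schema and value > schema["maximum"]:
        return False
    return True


vocab = [{"name": "toxic_content", "type": "score_normalized",
          "value_schema": {"type": "number", "minimum": 0, "maximum": 1}}]
```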
Optional Endpoints
POST /batch
Batch processing for multiple inputs.
```json
{
  "items": [
    {"data": {...}, "phase": "request", "lucid_context": {...}},
    {"data": {...}, "phase": "response", "lucid_context": {...}}
  ]
}
```
GET /metrics
Prometheus-compatible metrics endpoint.
```text
# HELP auditor_claims_total Total claims produced
# TYPE auditor_claims_total counter
auditor_claims_total{claim_name="toxic_content"} 1234

# HELP auditor_latency_seconds Claim production latency
# TYPE auditor_latency_seconds histogram
auditor_latency_seconds_bucket{le="0.1"} 1000
auditor_latency_seconds_bucket{le="0.5"} 1200
```
Claim Schema
Claim Structure
```typescript
interface Claim {
  // Claim name from the auditor's vocabulary (e.g., "toxic_content")
  name: string;

  // Claim type identifier (e.g., "score_normalized", "boolean", "string_list")
  type: ClaimType;

  // Claim value (schema depends on type)
  value: any;

  // Additional structured metadata
  metadata?: Record<string, any>;

  // ISO 8601 timestamp
  timestamp: string;

  // Confidence score (0.0 to 1.0)
  confidence?: number;
}
```
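In Python, the same structure can be modeled as a dataclass. This is a sketch for illustration; the real `lucid_schemas.Claim` class may differ in field names and defaults:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Any, Optional


@dataclass
class Claim:
    name: str
    type: str
    value: Any
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    metadata: Optional[dict] = None
    confidence: Optional[float] = None

    def to_dict(self) -> dict:
        """Serialize for the /claims response, dropping unset optional fields."""
        return {k: v for k, v in asdict(self).items() if v is not None}


c = Claim(name="toxic_content", type="score_normalized", value=0.12, confidence=0.95)
```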
Standard Claim Types
| Type | Description | Value Schema |
|---|---|---|
| `score_normalized` | Normalized score (0.0 to 1.0) | `number` |
| `boolean` | True/false observation | `boolean` |
| `string_list` | List of string labels | `string[]` |
| `object` | Structured observation | `Record<string, any>` |
| `count` | Integer count | `integer` |
| `duration_ms` | Duration in milliseconds | `number` |
Evidence Structure (Gateway Only)
The Gateway bundles claims from all auditors into a signed Evidence container after Cedar policy evaluation. Individual auditors do not produce Evidence.
```typescript
interface Evidence {
  // Schema version
  schema_version: string; // "2.0.0"

  // Unique evidence identifier
  evidence_id: string;

  // Gateway identification
  attester_id: string; // "lucid-gateway"
  attester_type: "gateway";

  // Claims from all auditors
  claims: Claim[];

  // Cedar policy evaluation result
  decision: "allow" | "deny";
  decision_reasons: string[];

  // Cedar policy reference
  policy_id: string;
  policy_version: string;

  // Execution phase
  phase: string;

  // Generation timestamp
  generated_at: string;

  // Signature covering all claims and decision
  signature: string;

  // Trust assessment (filled by Verifier)
  trust_tier?: TrustTier;
}
```
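A simplified sketch of how the Gateway could assemble and sign Evidence after Cedar evaluation. HMAC over canonical JSON stands in for the real Attestation Agent signature, and the key and helper names are illustrative assumptions:

```python
import hashlib
import hmac
import json
import uuid
from datetime import datetime, timezone


def bundle_evidence(claims, decision, reasons, policy_id, policy_version,
                    phase, signing_key: bytes) -> dict:
    """Bundle claims plus the policy decision, then sign the whole container."""
    evidence = {
        "schema_version": "2.0.0",
        "evidence_id": str(uuid.uuid4()),
        "attester_id": "lucid-gateway",
        "attester_type": "gateway",
        "claims": claims,
        "decision": decision,
        "decision_reasons": reasons,
        "policy_id": policy_id,
        "policy_version": policy_version,
        "phase": phase,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    # One signature covers the claims *and* the decision context.
    payload = json.dumps(evidence, sort_keys=True).encode()
    evidence["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return evidence


ev = bundle_evidence(
    claims=[{"name": "toxic_content", "type": "score_normalized", "value": 0.12}],
    decision="allow", reasons=["all scores below policy thresholds"],
    policy_id="default", policy_version="3", phase="request",
    signing_key=b"demo-key")
```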
Error Handling
Error Response Format
```json
{
  "status": "error",
  "error": {
    "code": "AUDITOR_TIMEOUT",
    "message": "Analysis timed out after 30 seconds",
    "retryable": true,
    "details": {
      "timeout_ms": 30000,
      "partial_claims": []
    }
  },
  "claims": []
}
```
Standard Error Codes
| Code | Description | Retryable |
|---|---|---|
| `AUDITOR_TIMEOUT` | Processing timeout | Yes |
| `AUDITOR_OVERLOAD` | Rate limit exceeded | Yes |
| `INVALID_INPUT` | Malformed request | No |
| `UNSUPPORTED_MODEL` | Model not supported | No |
| `INTERNAL_ERROR` | Internal auditor error | Yes |
| `TEE_ATTESTATION_FAILED` | TEE verification failed | No |
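Clients should key retry behavior off the `retryable` flag rather than hard-coding error codes; the code table serves as a fallback when the flag is absent. A sketch with exponential backoff (the delay values are illustrative, not mandated by this spec):

```python
# Fallback classification mirroring the error-code table above.
RETRYABLE_CODES = {"AUDITOR_TIMEOUT", "AUDITOR_OVERLOAD", "INTERNAL_ERROR"}


def retry_delays(error: dict, base: float = 0.5, attempts: int = 3) -> list[float]:
    """Return exponential-backoff delays if the error is retryable, else []."""
    retryable = error.get("retryable", error.get("code") in RETRYABLE_CODES)
    if not retryable:
        return []
    return [base * (2 ** i) for i in range(attempts)]
```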
Registration Protocol
Auditor Self-Registration
On startup, auditors register with the Gateway, advertising their vocabulary:
```http
POST /v1/auditors/register
Content-Type: application/json

{
  "auditor_id": "toxicity-auditor-v1",
  "endpoint": "http://toxicity-auditor:8080",
  "vocabulary_url": "http://toxicity-auditor:8080/vocabulary",
  "health_check_interval": 30
}
```
Gateway Discovery Response
```json
{
  "registered": true,
  "auditor_id": "toxicity-auditor-v1",
  "vocabulary_synced": true
}
```
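An auditor can derive the registration body entirely from its own metadata at startup. A small sketch (the host/port values are illustrative):

```python
def registration_body(auditor_id: str, host: str, port: int,
                      health_check_interval: int = 30) -> dict:
    """Build the self-registration payload sent to the Gateway."""
    base = f"http://{host}:{port}"
    return {
        "auditor_id": auditor_id,
        "endpoint": base,
        "vocabulary_url": f"{base}/vocabulary",
        "health_check_interval": health_check_interval,
    }
```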
Security Requirements
TEE Attestation
All auditors must:
1. Run in a TEE environment (or MOCK mode for development)
2. Provide attestation evidence on registration
3. Return claims unsigned -- the Gateway signs the full Evidence bundle
Signature Model
Individual auditors do not sign their claims. The Gateway:
1. Collects claims from all auditors
2. Evaluates the Cedar policy
3. Bundles claims + decision into Evidence
4. Signs the Evidence through the Attestation Agent
This ensures a single signature covers the complete decision context.
Configuration
Environment Variables
All auditors should support these standard variables:
| Variable | Description | Required |
|---|---|---|
| `LUCID_GATEWAY_URL` | Gateway service URL | Yes |
| `MODEL_ID` | Target model identifier | No |
| `HTTP_TIMEOUT` | Request timeout (seconds) | No |
| `TEE_PROVIDER` | TEE provider (`COCO`, `MOCK`) | Yes |
Detection Overrides via AuditorPolicy.detection
Detection settings are declared as keyword-only parameters on @claims-decorated methods and overridden per-policy via the AuditorPolicy.detection section. The lucid_context.detection_overrides field in the /claims request provides the effective overrides resolved from the policy. Auditors receive the effective parameter values transparently -- no manual lookup required.
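The override plumbing can be illustrated with a deliberately simplified re-implementation of the decorator contract. The real `@claims` decorator lives in the SDK and may differ; this sketch only shows how keyword-only parameters receive their effective values:

```python
import functools


def claims(phase: str):
    """Simplified sketch: merge detection_overrides into keyword-only params."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(self, request: dict, lucid_context: dict, **kwargs):
            overrides = lucid_context.get("detection_overrides", {})
            return fn(self, request, **{**kwargs, **overrides})
        return inner
    return wrap


class ToxicityAuditor:
    @claims(phase="request")
    def measure_toxicity(self, request: dict, *, threshold: float = 0.5) -> float:
        # `threshold` arrives already resolved: the declared default unless
        # AuditorPolicy.detection overrides it -- no manual lookup needed.
        return threshold
```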
Enforcement Modes
Each field in AuditorPolicy.detection can carry an enforcement mode that constrains how overrides are applied at the policy scope:
| Mode | Behavior | Valid For |
|---|---|---|
| `unlocked` | No constraint (default) | All types |
| `exact` | Override must use the specified value | All types |
| `floor` | Override value must be >= the specified value | Numeric |
| `ceiling` | Override value must be <= the specified value | Numeric |
| `superset` | Override must include all specified items | Array |
Enforcement is validated by the Verifier API when policy overrides are saved. Individual auditors do not need to implement enforcement validation -- they receive the already-resolved effective configuration.
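The Verifier's validation step can be sketched as one function over the mode table above (a simplified illustration; the actual Verifier API may differ):

```python
def check_override(mode: str, constraint, override) -> bool:
    """Validate a proposed policy override against an enforcement mode."""
    if mode == "unlocked":
        return True                              # no constraint
    if mode == "exact":
        return override == constraint            # must match exactly
    if mode == "floor":
        return override >= constraint            # numeric lower bound
    if mode == "ceiling":
        return override <= constraint            # numeric upper bound
    if mode == "superset":
        return set(constraint).issubset(set(override))  # must include all items
    raise ValueError(f"unknown enforcement mode: {mode}")
```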
Presets
Presets are policy templates that bundle both detection overrides and Cedar response rules for common risk profiles (Starter, Balanced, Strict). Presets are applied through the Verifier API and expressed as AuditorPolicy documents with both detection_overrides and Cedar rules. See the Auditor Development Guide for implementation details.
Versioning
Schema Versioning
Claim and Evidence schemas use semantic versioning:
- Major: Breaking changes (e.g., v1.x `/audit` -> v2.x `/claims`)
- Minor: New optional fields added
- Patch: Documentation/description changes
Backward Compatibility
The Gateway maintains compatibility with:
- Current schema version (v2.x Claims model)
- Previous major version (v1.x): deprecated; its `/audit` endpoint has been removed
Example Implementations
Minimal Python ClaimsAuditor
```python
from lucid_auditor_sdk import ClaimsAuditor, claims, serve, Phase
from lucid_schemas import Claim


class ToxicityAuditor(ClaimsAuditor):
    def __init__(self):
        super().__init__("toxicity-auditor", "1.0.0")
        self.model = load_toxicity_model()

    @claims(phase=Phase.REQUEST)
    def measure_toxicity(self, request: dict) -> list[Claim]:
        score = self.model.analyze(request.get("prompt", ""))
        return [
            Claim(name="toxic_content", type="score_normalized", value=score),
        ]


# Deploy as HTTP service
serve(ToxicityAuditor(), port=8080)
```
Minimal Non-Python Auditor
Any language can implement the interface. The required endpoints are:
- GET /health -- return {"status": "healthy"}
- POST /claims -- accept data, return claims array
- GET /vocabulary -- return claim name/type declarations
Changelog
v2.1.0 (February 2026)
- Added enforcement modes (floor, ceiling, exact, superset, unlocked) for AuditorPolicy.detection
- Added preset support as policy templates (Starter, Balanced, Strict tiers)
- Detection settings declared via `@claims` decorator keyword-only parameters
- Documented detection overrides and enforcement in interface spec
v2.0.0 (February 2026)
- Replaced `/audit` with `/claims` endpoint
- Removed decision responses (blocked, proceed, deny)
- Added `/vocabulary` endpoint for claim discovery
- Removed chain forwarding protocol (Gateway handles orchestration)
- Evidence creation moved to Gateway exclusively
- Cedar policy evaluation replaces per-auditor enforcement
v1.0.0 (January 2026)
- Initial specification
- Standard measurement types
- Chain forwarding protocol
- Registration protocol