Lucid SDK Reference
This section provides the API reference for the Lucid SDK and the underlying schemas used for attestation.
AuditorApp (Recommended)
The high-level AuditorApp class provides the simplest way to create auditors with minimal boilerplate.
lucid_sdk.app.AuditorApp
Simplified high-level class for building auditors with minimal boilerplate.
AuditorApp combines configuration, FastAPI app creation, and auditor registration into a single cohesive class. It eliminates the repetitive boilerplate that appears in every auditor.
Features:

- Automatic configuration from environment variables
- Built-in /audit endpoint with phase detection and chaining
- Decorator-based handler registration
- Automatic evidence submission to verifier
- Health/ready endpoints included
Attributes:
| Name | Type | Description |
|---|---|---|
| name |  | Display name for the auditor. |
| auditor_id |  | Unique identifier (defaults to env LUCID_AUDITOR_ID). |
| config |  | Configuration instance. |
| app |  | The FastAPI application. |
| logger |  | Structured logger. |
| http_factory | HTTPClientFactory | HTTP client factory with retry logic. |
Example

```python
from lucid_sdk import AuditorApp, Proceed, Deny, Warn

# Create app with minimal config
app = AuditorApp("pii-compliance")

# Or with explicit config
app = AuditorApp(
    "pii-compliance",
    port=8096,
    auditor_id="lucid-pii-compliance-auditor",
)

@app.on_request
def check_input(data, config=None, lucid_context=None):
    pii_found = detect_pii(data.get("content", ""))
    if pii_found:
        return Deny("PII detected", entities=pii_found)
    return Proceed(data={"pii_checked": True})

@app.on_response
def check_output(data, request=None, lucid_context=None):
    # Check response for PII leakage
    return Proceed()

if __name__ == "__main__":
    app.run()
```
Source code in packages/lucid-sdk/lucid_sdk/app.py
__init__(name, *, auditor_id=None, port=None, config_class=None, on_startup=None, on_shutdown=None, chain_failure_status='deny', **config_overrides)
Initialize the AuditorApp.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Display name for the auditor (e.g., "PII Compliance Auditor"). | required |
| auditor_id | Optional[str] | Unique ID. Defaults to LUCID_AUDITOR_ID env var or name-based ID. | None |
| port | Optional[int] | Server port. Defaults to PORT env var or 8090. | None |
| config_class | Optional[type] | Optional custom config class extending BaseAuditorConfig. | None |
| on_startup | Optional[Callable] | Optional async callback for startup. | None |
| on_shutdown | Optional[Callable] | Optional async callback for shutdown. | None |
| chain_failure_status | str | Status to return on chain failure ("deny" or "warn"). | 'deny' |
| **config_overrides | Any | Additional config values to set. | {} |
Source code in packages/lucid-sdk/lucid_sdk/app.py
add_config_endpoint(include_sensitive=False)
Add a /config endpoint to expose current configuration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include_sensitive | bool | Whether to include potentially sensitive values. | False |
Example

```python
app.add_config_endpoint()
# GET /config now returns the current configuration as JSON
```
Source code in packages/lucid-sdk/lucid_sdk/app.py
get(path, **kwargs)
Add a GET route to the FastAPI app.
Source code in packages/lucid-sdk/lucid_sdk/app.py
on_artifact(func)
Register a handler for deployment artifacts (Phase 1: Build).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| func | HandlerT | Handler function that receives artifact data. | required |

Returns:

| Type | Description |
|---|---|
| HandlerT | The decorated function unchanged. |
Example
```python
@app.on_artifact
def check_model(data, config=None, lucid_context=None):
    if not is_safetensors(data.get("model_path")):
        return Deny("Only safetensors format allowed")
    return Proceed()
```
Source code in packages/lucid-sdk/lucid_sdk/app.py
on_execution(func)
Register a handler for runtime execution (Phase 3: Execution).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| func | HandlerT | Handler function that receives execution context. | required |

Returns:

| Type | Description |
|---|---|
| HandlerT | The decorated function unchanged. |
Example
```python
@app.on_execution
def monitor_execution(data, config=None, lucid_context=None):
    if data.get("step_count", 0) > 100:
        return Deny("Execution loop limit exceeded")
    return Proceed()
```
Source code in packages/lucid-sdk/lucid_sdk/app.py
on_request(func)
Register a handler for incoming requests (Phase 2: Input).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| func | HandlerT | Handler function that receives request data. | required |

Returns:

| Type | Description |
|---|---|
| HandlerT | The decorated function unchanged. |
Example
```python
@app.on_request
def check_input(data, config=None, lucid_context=None):
    messages = data.get("messages", [])
    for msg in messages:
        if contains_pii(msg.get("content", "")):
            return Deny("PII detected in input")
    return Proceed()
```
Source code in packages/lucid-sdk/lucid_sdk/app.py
on_response(func)
Register a handler for model responses (Phase 4: Output).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| func | HandlerT | Handler function that receives response data. | required |

Returns:

| Type | Description |
|---|---|
| HandlerT | The decorated function unchanged. |
Example
```python
@app.on_response
def check_output(data, request=None, lucid_context=None):
    content = data.get("content", "")
    if is_toxic(content):
        return Deny("Toxic content in response")
    return Proceed()
```
Source code in packages/lucid-sdk/lucid_sdk/app.py
post(path, **kwargs)
Add a POST route to the FastAPI app.
Source code in packages/lucid-sdk/lucid_sdk/app.py
route(path, **kwargs)
Add a custom route to the FastAPI app.
This is a passthrough to app.api_route for adding custom endpoints.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | str | URL path for the route. | required |
| **kwargs | Any | Additional arguments for FastAPI route decorator. | {} |

Returns:

| Type | Description |
|---|---|
| Callable | Decorator function. |
Example
```python
@app.route("/status", methods=["GET"])
async def custom_status():
    return {"custom": "status"}
```
Source code in packages/lucid-sdk/lucid_sdk/app.py
run(host='0.0.0.0', port=None)
Run the auditor application.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| host | str | Host to bind to. Defaults to 0.0.0.0. | '0.0.0.0' |
| port | Optional[int] | Port to bind to. Defaults to config.port. | None |
Source code in packages/lucid-sdk/lucid_sdk/app.py
lucid_sdk.app.create_app(name, **kwargs)
Factory function to create an AuditorApp.
This is a convenience function that creates an AuditorApp instance. Equivalent to AuditorApp(name, **kwargs).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Display name for the auditor. | required |
| **kwargs | Any | Additional arguments passed to AuditorApp. | {} |

Returns:

| Type | Description |
|---|---|
| AuditorApp | AuditorApp instance. |
Example

```python
from lucid_sdk import create_app, Proceed, Deny

app = create_app("secrets-detector", port=8095)

@app.on_request
def check_secrets(data, config=None):
    return Proceed()

if __name__ == "__main__":
    app.run()
```
Source code in packages/lucid-sdk/lucid_sdk/app.py
Auditor (Builder Pattern)
lucid_sdk.auditor.Auditor
Bases: ABC
Abstract base class for all Lucid Auditors.
Auditors are the primary units of safety enforcement in the Lucid platform. They execute within Trusted Execution Environments (TEEs) and produce cryptographically signed evidence of their findings.
Attributes:
| Name | Type | Description |
|---|---|---|
| auditor_id | str | Unique identifier for the auditor. |
| version | str | Protocol version string. |
| tee | LucidClient | Client for hardware attestation and secret management. |
| verifier_url | Optional[str] | Endpoint for the Verifier service to send evidence to. |
| config | AuditorConfig | Auditor-specific configuration. |
API Contract

Subclasses must implement:

- check_request(request, lucid_context) -> AuditResult
- check_execution(context, lucid_context) -> AuditResult
- check_response(response, request, lucid_context) -> AuditResult

Request Parameter (RequestPayload):

- messages: List[Dict] - Conversation messages with role and content
- model: str - Model identifier being called
- nonce: str (optional) - Anti-replay token for session binding
- metadata: Dict (optional) - Additional request metadata

Context Parameter (ExecutionContext, for check_execution):

- tool_calls: List[Dict] - Tool invocations with name and arguments
- intermediate_outputs: List[str] - Model intermediate reasoning steps
- resource_usage: Dict - CPU/memory/token consumption metrics

Response Parameter (ResponsePayload):

- content: str - Generated text response
- tool_calls: List[Dict] (optional) - Tool calls in the response
- finish_reason: str - Why generation stopped (stop, length, tool_calls)
- usage: Dict - Token usage statistics
lucid_context Structure (LucidContext): Enables dataflow between auditors in a chain. Each auditor's AuditResult.data is stored under its auditor_id key:

```python
{
    "pii-auditor": {
        "contains_pii": False,
        "confidence": 0.95,
        "detected_entities": []
    },
    "injection-auditor": {
        "is_injection": False,
        "score": 0.1
    }
}
```
Example:

```python
class MyAuditor(Auditor):
    def check_request(self, request, lucid_context=None):
        # Access upstream auditor results
        if lucid_context and "pii-auditor" in lucid_context:
            if lucid_context["pii-auditor"].get("contains_pii"):
                return Deny("PII detected by upstream auditor")
        # Pass data to downstream auditors via AuditResult.data
        return Proceed(data={"processed": True, "score": 0.8})
```
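The chaining contract described above can be sketched as a plain dict merge: each auditor's result data lands in lucid_context under its auditor_id before the next auditor runs. This is an illustration of the contract only, not the SDK's implementation; run_chain and the two stub auditors are hypothetical.

```python
def run_chain(auditors, request):
    """auditors: list of (auditor_id, check_fn) pairs; each check_fn returns a data dict."""
    lucid_context = {}
    for auditor_id, check_fn in auditors:
        data = check_fn(request, lucid_context)
        lucid_context[auditor_id] = data  # visible to all downstream auditors
    return lucid_context

# Hypothetical stub auditors, for illustration only
def pii_check(request, ctx):
    return {"contains_pii": False, "confidence": 0.95}

def injection_check(request, ctx):
    # A downstream auditor can read the upstream result from the context
    upstream = ctx.get("pii-auditor", {})
    return {"is_injection": False, "saw_pii_result": "contains_pii" in upstream}

ctx = run_chain(
    [("pii-auditor", pii_check), ("injection-auditor", injection_check)],
    {},
)
```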
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
check_execution(context, lucid_context=None)
abstractmethod
Monitor the model execution process.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| context | ExecutionContext | Execution context containing telemetry indicators. | required |
| lucid_context | LucidContext | Optional context from previous auditors (dataflow). | None |

Returns:

| Type | Description |
|---|---|
| AuditResult | AuditResult containing the decision. |
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
check_request(request, lucid_context=None)
abstractmethod
Evaluate an incoming model request.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| request | RequestPayload | The request payload to audit (dict or Pydantic model). | required |
| lucid_context | LucidContext | Optional context from previous auditors (dataflow). | None |

Returns:

| Type | Description |
|---|---|
| AuditResult | AuditResult containing the decision. |
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
check_response(response, request=None, lucid_context=None)
abstractmethod
Evaluate a model-generated response.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| response | ResponsePayload | The response payload to audit (dict or Pydantic model). | required |
| request | Optional[RequestPayload] | Optional original request for context. | None |
| lucid_context | LucidContext | Optional context from previous auditors (dataflow). | None |

Returns:

| Type | Description |
|---|---|
| AuditResult | AuditResult containing the decision. |
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
create_claim(phase, result, nonce=None)
Create an unsigned Claim for the given audit result.
Claims are unsigned assertions that can be bundled into Evidence. Use create_evidence() to bundle Claims and sign them together.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| phase | str | The lifecycle phase to record. | required |
| result | AuditResult | The AuditResult to transform into a Claim. | required |
| nonce | Optional[str] | Optional anti-replay nonce. | None |

Returns:

| Type | Description |
|---|---|
| ClaimDict | Dictionary representation of a Claim (ClaimDict). |
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
create_evidence(phase, results, nonce=None, evidence_id=None)
Create and sign an Evidence bundle for the given audit results.
Evidence bundles one or more Claims and signs them together. This is the RATS-compliant (RFC 9334) approach, replacing per-Measurement signatures with per-Evidence signatures.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| phase | str | The lifecycle phase (request, response, artifact, execution). | required |
| results | Union[AuditResult, List[AuditResult]] | Single AuditResult or list of AuditResults to bundle. | required |
| nonce | Optional[str] | Optional anti-replay nonce. | None |
| evidence_id | Optional[str] | Optional custom evidence ID. If not provided, auto-generated. | None |

Returns:

| Type | Description |
|---|---|
| EvidenceDict | Dictionary representation of signed Evidence (EvidenceDict). |
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
emit_evidence(phase, result, request=None)
Standard method to create, sign, and send evidence to the Verifier.
This method wraps the audit result into an Evidence bundle (RFC 9334), calls the hardware Attestation Agent to sign it, and pushes it to the Verifier.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| phase | str | The lifecycle phase (artifact, request, execution, response). | required |
| result | AuditResult | The result of the audit. | required |
| request | Optional[RequestPayload] | Optional request object to extract nonces/metadata. | None |
|
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
lucid_sdk.auditor.AuditResult
Bases: BaseModel
The outcome of an auditor's evaluation.
Encapsulates the decision made by the auditor, along with any relevant reasons, modifications to the data, and additional metadata for the Verifier or Observer.
Attributes:
| Name | Type | Description |
|---|---|---|
| decision | AuditDecision | The final decision (PROCEED, DENY, REDACT, WARN). |
| reason | Optional[str] | Human-readable explanation for the decision. |
| modifications | Optional[Dict[str, object]] | If decision is REDACT, contains the specific key-value updates to be applied to the request. |
| metadata | MetadataDict | Arbitrary key-value pairs providing extra context for the audit (e.g., specific rules triggered). |
| data | Dict[str, Any] | Results to be passed to the NEXT auditor in the chain (dataflow). Accepts arbitrary key-value pairs for flexible inter-auditor communication. |
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
Helpers
lucid_sdk.auditor.Proceed(reason=None, data=None, **metadata)
Helper to create a PROCEED result.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| reason | Optional[str] | Optional explanation. | None |
| data | Optional[AuditorDataDict] | Optional results to pass to next auditor (dataflow). | None |
| **metadata | MetadataValue | Extra context to include (e.g., safety_score=1.0). | {} |

Returns:

| Type | Description |
|---|---|
| AuditResult | AuditResult with PROCEED decision. |
Example
```python
return Proceed(safety_score=0.95, data={"processed": True})
```
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
lucid_sdk.auditor.Deny(reason, data=None, **metadata)
Helper to create a DENY result.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| reason | str | Required explanation for the denial. | required |
| data | Optional[AuditorDataDict] | Optional results to pass to next auditor (dataflow). | None |
| **metadata | MetadataValue | Extra context to include. | {} |

Returns:

| Type | Description |
|---|---|
| AuditResult | AuditResult with DENY decision. |
Example
```python
return Deny("Prompt injection detected", injection_score=0.95)
```
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
lucid_sdk.auditor.Redact(modifications, reason=None, data=None, **metadata)
Helper to create a REDACT result.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| modifications | Dict[str, object] | Dictionary of keys and their new, redacted values. | required |
| reason | Optional[str] | Optional explanation. | None |
| data | Optional[AuditorDataDict] | Optional results to pass to next auditor (dataflow). | None |
| **metadata | MetadataValue | Extra context to include. | {} |

Returns:

| Type | Description |
|---|---|
| AuditResult | AuditResult with REDACT decision. |
Example
```python
return Redact({"content": "[REDACTED]"}, reason="PII detected")
```
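Per the AuditResult documentation, a REDACT result carries the specific key-value updates to be applied to the request. How an enforcement point might apply them can be sketched as a shallow dict update; apply_modifications is a hypothetical helper, not part of the SDK.

```python
def apply_modifications(request: dict, modifications: dict) -> dict:
    """Apply REDACT key-value updates to a request payload (shallow merge).

    Hypothetical sketch: copies the request, then overwrites the redacted keys.
    """
    redacted = dict(request)
    redacted.update(modifications)
    return redacted

request = {"content": "My SSN is 123-45-6789", "model": "gpt-x"}
redacted = apply_modifications(request, {"content": "[REDACTED]"})
# Only the listed keys change; the original request dict is left untouched.
```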
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
lucid_sdk.auditor.Warn(reason, data=None, **metadata)
Helper to create a WARN result.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| reason | str | Required explanation for the warning. | required |
| data | Optional[AuditorDataDict] | Optional results to pass to next auditor (dataflow). | None |
| **metadata | MetadataValue | Extra context to include. | {} |

Returns:

| Type | Description |
|---|---|
| AuditResult | AuditResult with WARN decision. |
Example
```python
return Warn("Elevated toxicity score", toxicity_score=0.6)
```
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
ClaimsAuditor (Policy-Driven Pattern)
lucid_sdk.auditor.ClaimsAuditor
Bases: ABC
Base class for policy-driven auditors that produce claims.
In the policy-driven architecture, ClaimsAuditor subclasses only produce claims (observations) using @claims decorated methods. The PolicyEngine evaluates claims against policy rules to make decisions.
This separates concerns:

- Auditors: Produce claims (measurements, observations)
- PolicyEngine: Makes decisions based on policy rules

Benefits:

- Policy changes take effect without redeploying auditors
- Claims can be reused across different policies
- Clear separation of measurement vs decision logic
Attributes:
| Name | Type | Description |
|---|---|---|
| auditor_id | str | Unique identifier for this auditor. |
| version | str | Version string for this auditor. |
Example
```python
class ToxicityAuditor(ClaimsAuditor):
    def __init__(self):
        super().__init__("toxicity-auditor", "1.0.0")
        self.model = load_toxicity_model()

    @claims(phase=Phase.REQUEST)
    def measure_toxicity(self, request: dict) -> list[Claim]:
        score = self.model.analyze(request.get("prompt", ""))
        return [Claim(
            name="toxicity.score",
            type=MeasurementType.score_normalized,
            value=score,
            confidence=0.95,
            timestamp=datetime.now(timezone.utc),
        )]

    @claims(phase=Phase.RESPONSE)
    def check_response_toxicity(self, response: dict) -> list[Claim]:
        content = response.get("content", "")
        score = self.model.analyze(content)
        return [Claim(
            name="response.toxicity.score",
            type=MeasurementType.score_normalized,
            value=score,
            confidence=0.95,
            timestamp=datetime.now(timezone.utc),
        )]
```
Note
Use with AuditorRuntime to orchestrate claim collection and policy enforcement. See AuditorRuntime for the complete workflow.
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
__init__(auditor_id, version='1.0.0')
Initialize the ClaimsAuditor.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| auditor_id | str | Unique identifier for this auditor. | required |
| version | str | Version string for this auditor implementation. | '1.0.0' |
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
get_claims_for_phase(phase, *args, **kwargs)
Collect all claims from @claims methods for a given phase.
This method discovers all methods decorated with @claims for the specified phase and invokes them to collect claims.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| phase | Phase | The lifecycle phase to collect claims for. | required |
| *args | Any | Positional arguments to pass to claim methods. | () |
| **kwargs | Any | Keyword arguments to pass to claim methods. | {} |

Returns:

| Type | Description |
|---|---|
| List[Claim] | List of all claims produced by methods for this phase. |
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
lucid_sdk.auditor.claims(phase, name=None)
Decorator that marks a method as producing claims.
In the policy-driven architecture, auditors only produce claims (observations), and the PolicyEngine decides the action (deny/proceed/warn/redact).
This decorator:

1. Marks the method as a claim producer
2. Records the lifecycle phase (request, response, etc.)
3. Enables AuditorRuntime to discover and invoke claim methods
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| phase | Phase | The lifecycle phase when this method should be invoked. | required |
| name | Optional[str] | Optional name for the claims produced. Defaults to method name. | None |

Returns:

| Type | Description |
|---|---|
| Callable[[Callable[..., List[Any]]], Callable[..., List[Any]]] | Decorated function that produces list[Claim]. |
Example
```python
class ToxicityAuditor(ClaimsAuditor):
    @claims(phase=Phase.REQUEST)
    def measure_toxicity(self, request: dict) -> list[Claim]:
        score = self.model.analyze(request["prompt"])
        return [Claim(
            name="toxicity.score",
            value=score,
            confidence=0.95,
            type=MeasurementType.score_normalized,
            timestamp=datetime.now(timezone.utc),
        )]
```
Note
- Decorated methods should return list[Claim], not AuditResult
- The PolicyEngine will evaluate claims against policy rules
- Methods are discovered via get_claims_methods() on ClaimsAuditor
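The mark-and-discover mechanism behind the decorator can be sketched with function attributes: the decorator tags each method, and discovery scans the instance for tagged callables matching the phase. This is a simplified illustration; the attribute names and the local claims/get_claims_methods here are assumptions, not the SDK's internals.

```python
def claims(phase, name=None):
    """Sketch of the decorator: tag the function with phase metadata."""
    def decorator(func):
        func._is_claims_method = True        # assumed marker attribute
        func._claims_phase = phase
        func._claims_name = name or func.__name__
        return func
    return decorator

class SketchAuditor:
    @claims(phase="request")
    def measure_length(self, request: dict) -> list:
        return [{"name": "prompt.length", "value": len(request.get("prompt", ""))}]

    def helper(self):  # undecorated: not discovered
        pass

def get_claims_methods(auditor, phase):
    """Sketch of discovery: scan instance attributes for tagged methods."""
    return [
        getattr(auditor, attr)
        for attr in dir(auditor)
        if callable(getattr(auditor, attr, None))
        and getattr(getattr(auditor, attr), "_is_claims_method", False)
        and getattr(getattr(auditor, attr), "_claims_phase", None) == phase
    ]

auditor = SketchAuditor()
methods = get_claims_methods(auditor, "request")
all_claims = [c for m in methods for c in m({"prompt": "hello"})]
```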
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
lucid_sdk.auditor.Phase
Bases: str, Enum
Lifecycle phase for claim production.
Indicates when in the request lifecycle a claim is produced.
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
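Because Phase subclasses both str and Enum, members compare equal to plain strings and serialize naturally. A minimal sketch of the pattern follows; only REQUEST and RESPONSE appear in the examples in this reference, so the other member names and all string values are assumptions.

```python
from enum import Enum

class Phase(str, Enum):
    # REQUEST/RESPONSE appear in the SDK examples above; ARTIFACT/EXECUTION
    # are assumed to mirror the four lifecycle phases named in this reference.
    ARTIFACT = "artifact"
    REQUEST = "request"
    EXECUTION = "execution"
    RESPONSE = "response"

# The str mixin means members behave like plain strings where needed
phase = Phase.REQUEST
```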
lucid_sdk.auditor.AuditorRuntime
Orchestrates claim collection and policy enforcement.
AuditorRuntime bridges ClaimsAuditor (which produces claims) with PolicyEngine (which makes decisions). It implements the full policy-driven architecture workflow:
- Invoke @claims methods on the auditor → collect claims
- Bundle claims into Evidence
- Pass Evidence to PolicyEngine for appraisal
- Return AuditRuntimeResult with decision and provenance
This separation enables:

- Policy updates without auditor redeployment
- Clear audit trail with policy version
- Standardized claim collection across auditors
Attributes:
| Name | Type | Description |
|---|---|---|
| auditor |  | The ClaimsAuditor instance to collect claims from. |
| policy_engine |  | The PolicyEngine (or DynamicPolicyEngine) for decisions. |
| tee |  | LucidClient for signing evidence. |
Example
```python
from lucid_sdk import ClaimsAuditor, AuditorRuntime
from lucid_sdk.policy_engine import DynamicPolicyEngine
from lucid_sdk.policy_source import VerifierPolicySource

# Create auditor
auditor = ToxicityAuditor()

# Create policy engine with dynamic refresh
source = VerifierPolicySource("https://verifier.example.com/v1")
engine = DynamicPolicyEngine(source, "toxicity-auditor")

# Create runtime
runtime = AuditorRuntime(auditor, engine)

# Evaluate a request
result = runtime.evaluate_request(request_data)
if result.decision == AuditDecision.DENY:
    return {"error": result.reason}
```
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
__init__(auditor, policy_engine, verifier_url=None)
Initialize the AuditorRuntime.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| auditor | ClaimsAuditor | The ClaimsAuditor to collect claims from. | required |
| policy_engine | Any | PolicyEngine for decision making. | required |
| verifier_url | Optional[str] | Optional Verifier endpoint for evidence submission. | None |
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
evaluate_request(request, lucid_context=None)
Evaluate a request through the policy-driven pipeline.
- Collects claims from @claims(phase=Phase.REQUEST) methods
- Bundles claims into Evidence
- Appraises Evidence against policy
- Returns result with decision and provenance
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| request | RequestPayload | The request payload to evaluate. | required |
| lucid_context | LucidContext | Optional context from previous auditors. | None |

Returns:

| Type | Description |
|---|---|
| AuditRuntimeResult | AuditRuntimeResult with decision, evidence, and policy info. |
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
evaluate_response(response, request=None, lucid_context=None)
Evaluate a response through the policy-driven pipeline.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| response | ResponsePayload | The response payload to evaluate. | required |
| request | Optional[RequestPayload] | Optional original request for context. | None |
| lucid_context | LucidContext | Optional context from previous auditors. | None |

Returns:

| Type | Description |
|---|---|
| AuditRuntimeResult | AuditRuntimeResult with decision, evidence, and policy info. |
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
lucid_sdk.auditor.AuditRuntimeResult
Bases: BaseModel
Result from AuditorRuntime evaluation.
Contains the decision, evidence, and policy version used.
Source code in packages/lucid-sdk/lucid_sdk/auditor.py
Policy Engine
lucid_sdk.policy_engine.PolicyEngine
Engine for evaluating claims against auditor policies.
The PolicyEngine is the main class for policy evaluation. It takes an AuditorPolicy and provides methods to validate claims, evaluate rules, and determine the final audit decision.
The evaluation process
- Validate claims against required/optional claim specifications
- Evaluate each policy rule's LPL condition
- Determine final decision based on rule outcomes and enforcement mode
Attributes:
| Name | Type | Description |
|---|---|---|
| `policy` | | The AuditorPolicy being enforced. |
| `parser` | | The LPL expression parser for condition evaluation. |
| `last_results` | `List[RuleResult]` | Results from the most recent rule evaluation. |
Example
```python
policy = load_policy("my_policy.yaml")
engine = PolicyEngine(policy)

# Full evaluation
result = engine.evaluate(claims)
print(f"Decision: {result.decision}")

# Or step-by-step
validation = engine.validate_claims(claims)
if validation.valid:
    decision = engine.enforce(claims)
    reason = engine.get_reason()
```
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
__init__(policy)
Initialize the PolicyEngine with an AuditorPolicy.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `policy` | `AuditorPolicy` | The AuditorPolicy to enforce. | *required* |
Raises:
| Type | Description |
|---|---|
| `ValueError` | If policy is None. |
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
appraise_attestation_result(attestation_result)
Appraise all Evidence in an AttestationResult (RFC 9334 compliant).
This is the primary RATS-compliant method for checking if Claims in AttestationResults are aligned with a policy. It:
- Iterates through all Evidence bundles in the AttestationResult
- Applies the Appraisal Policy to each Evidence's Claims
- Sets the trust_tier on each Evidence
- Updates the overall deployment_authorized status
Per RFC 9334, the Verifier processes Evidence and produces Attestation Results that Relying Parties can use for authorization decisions.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `attestation_result` | `AttestationResult` | The AttestationResult to appraise. | *required* |
Returns:
| Type | Description |
|---|---|
| `AttestationResult` | The updated AttestationResult with each Evidence's trust_tier set, deployment_authorized updated based on appraisal, and authorization_reason explaining the decision. |
Example
```python
result = engine.appraise_attestation_result(attestation_result)
if result.deployment_authorized:
    print("All evidence appraised successfully")
else:
    print(f"Appraisal failed: {result.authorization_reason}")
```
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
appraise_evidence(evidence)
Appraise Evidence and set its trust_tier (RFC 9334 compliant).
This is the RATS-compliant way to evaluate Evidence. The Verifier applies the Appraisal Policy for Evidence to assess trustworthiness and sets the trust_tier field on the Evidence.
Per RFC 9334
- "affirming": Claims meet all policy requirements
- "warning": Claims have minor issues but are acceptable
- "contraindicated": Claims violate critical policy rules
- "none": Unable to determine trustworthiness
Also populates the EAR-compliant appraisal_record with per-claim appraisal details for visualization and audit.
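The tier assignment can be illustrated with a standalone sketch. The tier names come from the documented RFC 9334 semantics above; the per-claim status values and the mapping logic here are simplifying assumptions, not the SDK's actual implementation:

```python
from typing import List

def assign_trust_tier(claim_statuses: List[str]) -> str:
    """Map per-claim appraisal statuses to an overall trust tier (simplified sketch)."""
    if not claim_statuses:
        return "none"                 # unable to determine trustworthiness
    if any(s == "violation" for s in claim_statuses):
        return "contraindicated"      # a critical policy rule was violated
    if any(s == "warning" for s in claim_statuses):
        return "warning"              # minor issues, still acceptable
    return "affirming"                # all claims meet policy requirements

print(assign_trust_tier(["ok", "ok"]))        # affirming
print(assign_trust_tier(["ok", "warning"]))   # warning
print(assign_trust_tier(["violation"]))       # contraindicated
```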
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `evidence` | `Evidence` | The Evidence object to appraise. | *required* |
Returns:
| Type | Description |
|---|---|
| `Evidence` | The same Evidence object with trust_tier and appraisal_record set. |
Example
```python
appraised = engine.appraise_evidence(evidence)
print(f"Trust tier: {appraised.trust_tier}")
for claim_result in appraised.appraisal_record['claim_appraisals']:
    print(f"  {claim_result['claim_name']}: {claim_result['status']}")
```
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
enforce(claims)
Enforce the policy and return the audit decision.
Evaluates all rules and determines the final decision based on
- Which rules triggered (condition NOT met)
- The actions specified by triggered rules
- The policy's enforcement mode
Decision logic
- If any rule with action="deny" triggers:
- EnforcementMode.BLOCK -> DENY
- EnforcementMode.WARN -> WARN
- EnforcementMode.LOG -> PROCEED (silent logging)
- EnforcementMode.AUDIT -> WARN (requires review)
- If any rule with action="warn" triggers -> WARN
- If any rule with action="redact" triggers -> REDACT
- Otherwise -> PROCEED
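The decision table above can be sketched as a small pure function. The enum member values are assumptions for illustration; only the mapping itself comes from the documentation:

```python
from enum import Enum
from typing import Set

class EnforcementMode(str, Enum):
    BLOCK = "block"
    WARN = "warn"
    LOG = "log"
    AUDIT = "audit"

class AuditDecision(str, Enum):
    PROCEED = "proceed"
    DENY = "deny"
    WARN = "warn"
    REDACT = "redact"

def decide(triggered_actions: Set[str], mode: EnforcementMode) -> AuditDecision:
    """Apply the documented decision logic to the actions of triggered rules."""
    if "deny" in triggered_actions:
        return {
            EnforcementMode.BLOCK: AuditDecision.DENY,
            EnforcementMode.WARN: AuditDecision.WARN,
            EnforcementMode.LOG: AuditDecision.PROCEED,   # silent logging
            EnforcementMode.AUDIT: AuditDecision.WARN,    # requires review
        }[mode]
    if "warn" in triggered_actions:
        return AuditDecision.WARN
    if "redact" in triggered_actions:
        return AuditDecision.REDACT
    return AuditDecision.PROCEED

print(decide({"deny"}, EnforcementMode.LOG))   # AuditDecision.PROCEED
```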
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `claims` | `List[Claim]` | List of claims to evaluate. | *required* |
Returns:
| Type | Description |
|---|---|
| `AuditDecision` | The AuditDecision for this request. |
Example
```python
decision = engine.enforce(claims)
if decision == AuditDecision.DENY:
    return {"error": engine.get_reason()}
```
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
evaluate(claims)
Perform complete policy evaluation.
This is the main entry point for policy evaluation. It:

1. Validates claims against requirements
2. Evaluates all policy rules
3. Determines the final decision
4. Returns a comprehensive result
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `claims` | `List[Claim]` | List of claims to evaluate. | *required* |
Returns:
| Type | Description |
|---|---|
| `PolicyEvaluationResult` | PolicyEvaluationResult with decision, validation, and rule results. |
Example
```python
result = engine.evaluate(claims)
print(f"Decision: {result.decision}")
print(f"Validation: {'PASS' if result.validation.valid else 'FAIL'}")
for rule in result.rule_results:
    if rule.triggered:
        print(f"  - {rule.rule_id}: {rule.message}")
```
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
evaluate_rules(claims)
Evaluate all policy rules against the provided claims.
Each rule's LPL condition is evaluated. A rule is "triggered" when its condition evaluates to False (meaning the condition for proceeding is NOT met).
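The "triggered when the condition is False" semantics can be sketched with plain callables standing in for parsed LPL expressions (the dataclasses here are illustrative stand-ins, not the SDK's models):

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class Rule:
    rule_id: str
    condition: Callable[[Dict[str, Any]], bool]  # stand-in for a parsed LPL condition
    message: str

@dataclass
class RuleResult:
    rule_id: str
    triggered: bool
    message: str

def evaluate_rules(rules: List[Rule], claims: Dict[str, Any]) -> List[RuleResult]:
    # A rule triggers when its condition is False: the condition states what
    # must hold for the request to proceed.
    return [
        RuleResult(r.rule_id, triggered=not r.condition(claims), message=r.message)
        for r in rules
    ]

rules = [Rule("max-toxicity", lambda c: c["toxicity.score"] < 0.8, "toxicity too high")]
print(evaluate_rules(rules, {"toxicity.score": 0.9})[0].triggered)  # True
```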
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `claims` | `List[Claim]` | List of claims to evaluate rules against. | *required* |
Returns:
| Type | Description |
|---|---|
| `List[RuleResult]` | List of RuleResult objects, one for each rule. |
Note
The results are also stored in self.last_results for later retrieval via get_reason().
Example
```python
results = engine.evaluate_rules(claims)
for result in results:
    if result.triggered:
        print(f"Rule {result.rule_id} triggered: {result.message}")
```
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
extract_all_claims(attestation_result)
Extract all Claims from all Evidence in an AttestationResult.
Flattens the nested structure to get a single list of all Claims for policy evaluation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `attestation_result` | `AttestationResult` | The AttestationResult to extract claims from. | *required* |
required |
Returns:
| Type | Description |
|---|---|
| `List[Claim]` | List of all Claims from all Evidence bundles. |
Example
```python
claims = engine.extract_all_claims(attestation_result)
print(f"Found {len(claims)} total claims")
```
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
find_claim(claims, name)
Find a claim by name in the claims list.
Searches through the provided claims for one matching the given name.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `claims` | `List[Claim]` | List of claims to search. | *required* |
| `name` | `str` | The claim name to find. | *required* |
Returns:
| Type | Description |
|---|---|
| `Optional[Claim]` | The matching Claim if found, None otherwise. |
Example
```python
claim = engine.find_claim(claims, "location.country")
if claim:
    print(f"Found claim with value: {claim.value}")
```
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
get_reason()
Get a human-readable reason for the last evaluation result.
Builds a summary of triggered rules and their messages from the most recent call to evaluate_rules() or enforce().
Returns:
| Type | Description |
|---|---|
| `str` | A string describing the triggered rules, or "No rules triggered" if the evaluation passed. |
Example
```python
decision = engine.enforce(claims)
if decision == AuditDecision.DENY:
    print(f"Denied: {engine.get_reason()}")
```
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
validate_claims(claims)
Validate claims against the policy's claim requirements.
Checks that
- All required claims are present
- Claims meet minimum confidence thresholds
- Claim values match their value_schema (if specified)
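The first two checks can be sketched as a standalone function. The `Claim` dataclass and the return shape here are simplifying assumptions standing in for the SDK's models:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Claim:
    name: str
    value: object
    confidence: float = 1.0

def validate_claims(claims: List[Claim], required: List[str],
                    min_confidence: float = 0.5) -> Tuple[bool, List[str]]:
    """Check required-claim presence and confidence thresholds (simplified sketch)."""
    errors = []
    by_name = {c.name: c for c in claims}
    for name in required:
        claim = by_name.get(name)
        if claim is None:
            errors.append(f"missing required claim: {name}")
        elif claim.confidence < min_confidence:
            errors.append(f"claim {name} below confidence threshold")
    return (len(errors) == 0, errors)

ok, errs = validate_claims([Claim("location.country", "IN", 0.9)], ["location.country"])
print(ok)  # True
```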
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
claims
|
List[Claim]
|
List of claims to validate. |
required |
Returns:
| Type | Description |
|---|---|
| `PolicyResult` | PolicyResult indicating whether validation passed and any errors. |
Example
```python
result = engine.validate_claims(claims)
if not result.valid:
    print(f"Validation failed: {result.errors}")
```
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
validate_schema(value, schema)
Validate a value against a JSON Schema.
Uses the jsonschema library if available. If jsonschema is not installed, this method returns True (validation skipped).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `value` | `Any` | The value to validate. | *required* |
| `schema` | `Dict[str, Any]` | The JSON Schema to validate against. | *required* |
Returns:
| Type | Description |
|---|---|
| `bool` | True if the value matches the schema (or jsonschema is unavailable), False if validation fails. |
Note
The jsonschema library is an optional dependency. If not installed, schema validation is skipped with a warning logged.
Example
```python
schema = {"type": "object", "properties": {"country": {"type": "string"}}}
engine.validate_schema({"country": "IN"}, schema)
# True
```
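The optional-dependency pattern described above can be sketched independently of the SDK; this is a minimal illustration, not the SDK's implementation:

```python
def validate_schema(value, schema):
    """Validate value against a JSON Schema; skip (return True) if jsonschema is absent."""
    try:
        import jsonschema
    except ImportError:
        return True  # optional dependency missing: validation is skipped
    try:
        jsonschema.validate(instance=value, schema=schema)
        return True
    except jsonschema.ValidationError:
        return False

schema = {"type": "object", "properties": {"country": {"type": "string"}}}
print(validate_schema({"country": "IN"}, schema))  # True
```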
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
lucid_sdk.policy_engine.DynamicPolicyEngine
PolicyEngine with dynamic policy refresh from a PolicySource.
This engine wraps the base PolicyEngine and adds:

- Dynamic policy fetching from a PolicySource
- Caching with a configurable refresh interval
- Graceful fallback on fetch failures
- A fail-closed mode for safety
Attributes:
| Name | Type | Description |
|---|---|---|
| `source` | `PolicySource` | The PolicySource to fetch policies from. |
| `auditor_id` | | The auditor ID to fetch policies for. |
| `refresh_interval` | | Seconds between policy refreshes (default: 60). |
| `max_stale_time` | | Max seconds to use a stale policy on failure (default: 300). |
| `fail_closed` | | If True, deny when the policy is unavailable (default: True). |
Example
```python
from lucid_sdk.policy_source import VerifierPolicySource

source = VerifierPolicySource("https://verifier.example.com/v1")
engine = DynamicPolicyEngine(
    source=source,
    auditor_id="my-auditor",
    refresh_interval=60,
)
result = engine.evaluate(claims)
print(f"Policy version: {engine.policy_version}")
```
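The refresh-and-fallback behavior can be sketched without the SDK. This standalone cache illustrates the documented semantics (refresh interval, stale fallback window, fail-closed); the class and its internals are illustrative assumptions:

```python
import time

class CachingPolicyWrapper:
    """Time-based policy cache with stale fallback (simplified sketch)."""

    def __init__(self, fetch, refresh_interval=60.0, max_stale_time=300.0, fail_closed=True):
        self._fetch = fetch                  # callable returning (policy, version)
        self.refresh_interval = refresh_interval
        self.max_stale_time = max_stale_time
        self.fail_closed = fail_closed
        self._policy = None
        self._version = None
        self._fetched_at = 0.0

    def policy(self):
        now = time.monotonic()
        if self._policy is None or now - self._fetched_at >= self.refresh_interval:
            try:
                self._policy, self._version = self._fetch()
                self._fetched_at = now
            except Exception:
                # Keep serving the stale policy within max_stale_time.
                if self._policy is None or now - self._fetched_at > self.max_stale_time:
                    if self.fail_closed:
                        raise  # the caller maps this to a DENY decision
                    self._policy = None
        return self._policy

cache = CachingPolicyWrapper(lambda: ({"rules": []}, "v1"), refresh_interval=60)
print(cache.policy())  # {'rules': []}
```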
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
config
property
Get the policy config for use in LPL expressions.
Returns:
| Type | Description |
|---|---|
| `Optional[Any]` | The PolicyConfig from the current policy, or None. |
policy
property
Get the current cached policy, refreshing if needed.
policy_version
property
Get the current policy version string.
__init__(source, auditor_id, refresh_interval=60.0, max_stale_time=300.0, fail_closed=True)
Initialize the DynamicPolicyEngine.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `source` | `PolicySource` | The PolicySource to fetch policies from. | *required* |
| `auditor_id` | `str` | The auditor ID to fetch policies for. | *required* |
| `refresh_interval` | `float` | Seconds between policy refreshes. | `60.0` |
| `max_stale_time` | `float` | Max seconds to use a stale policy on failure. | `300.0` |
| `fail_closed` | `bool` | If True, deny when no policy is available. | `True` |
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
appraise_evidence(evidence)
Appraise Evidence and set its trust_tier.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `evidence` | `Evidence` | The Evidence object to appraise. | *required* |
Returns:
| Type | Description |
|---|---|
| `Evidence` | The Evidence with trust_tier set. |
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
enforce(claims)
Enforce the policy and return the audit decision.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `claims` | `List[Claim]` | List of claims to evaluate. | *required* |
Returns:
| Type | Description |
|---|---|
| `AuditDecision` | AuditDecision (DENY if no policy and fail_closed). |
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
evaluate(claims)
Evaluate claims against the current policy.
If no policy is available and fail_closed is True, returns DENY.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `claims` | `List[Claim]` | List of claims to evaluate. | *required* |
Returns:
| Type | Description |
|---|---|
| `PolicyEvaluationResult` | PolicyEvaluationResult with decision and details. |
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
evaluate_rules(claims)
Evaluate all policy rules against claims.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `claims` | `List[Claim]` | List of claims to evaluate. | *required* |
Returns:
| Type | Description |
|---|---|
| `List[RuleResult]` | List of RuleResult objects, or an empty list if no policy. |
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
lucid_sdk.policy_engine.PolicyEvaluationResult
Bases: BaseModel
Complete result of policy evaluation.
This model provides a comprehensive view of the policy evaluation, including the final decision, validation results, and individual rule evaluation results.
Attributes:
| Name | Type | Description |
|---|---|---|
| `decision` | `AuditDecision` | The final audit decision (PROCEED, DENY, REDACT, WARN). |
| `validation` | `PolicyResult` | Result of claim validation against requirements. |
| `rule_results` | `List[RuleResult]` | List of individual rule evaluation results. |
| `policy_id` | `str` | ID of the policy that was evaluated. |
| `policy_version` | `str` | Version of the policy that was evaluated. |
Example
```python
result = engine.evaluate(claims)
if result.decision == AuditDecision.DENY:
    triggered = [r for r in result.rule_results if r.triggered]
    for rule in triggered:
        print(f"Rule {rule.rule_id}: {rule.message}")
```
Source code in packages/lucid-sdk/lucid_sdk/policy_engine.py
Policy Sources
lucid_sdk.policy_source.PolicySource
Bases: ABC
Abstract base class for policy sources.
PolicySource provides an interface for fetching policies from various backends (API, file, etc.). Implementations should handle caching and error recovery internally or delegate to PolicyEngine.
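A custom source only needs to implement `fetch`. The sketch below defines its own stand-in ABC mirroring the documented interface (the real base class lives in `lucid_sdk.policy_source`); a static in-memory source like this is handy for unit tests:

```python
from abc import ABC, abstractmethod
from typing import Tuple

class PolicySource(ABC):
    """Stand-in for lucid_sdk.policy_source.PolicySource (interface shape assumed)."""

    @abstractmethod
    def fetch(self, auditor_id: str) -> Tuple[dict, str]:
        """Return (policy, version_string) for the given auditor."""

class StaticPolicySource(PolicySource):
    """Serves a fixed in-memory policy -- useful for tests and offline runs."""

    def __init__(self, policy: dict, version: str = "static-1"):
        self._policy, self._version = policy, version

    def fetch(self, auditor_id: str) -> Tuple[dict, str]:
        return self._policy, self._version

source = StaticPolicySource({"rules": []})
print(source.fetch("my-auditor"))  # ({'rules': []}, 'static-1')
```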
Source code in packages/lucid-sdk/lucid_sdk/policy_source.py
fetch(auditor_id)
abstractmethod
Fetch the current policy for an auditor.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `auditor_id` | `str` | The unique identifier of the auditor. | *required* |
Returns:
| Type | Description |
|---|---|
| `Tuple[AuditorPolicy, str]` | A tuple of (AuditorPolicy, version_string). The version string should change whenever the policy changes. |
Raises:
| Type | Description |
|---|---|
| `PolicySourceError` | If the policy cannot be fetched. |
Source code in packages/lucid-sdk/lucid_sdk/policy_source.py
lucid_sdk.policy_source.VerifierPolicySource
Bases: PolicySource
Fetches policies from the Verifier API.
This source fetches policies from a Verifier service endpoint, enabling centralized policy management and dynamic updates.
Attributes:
| Name | Type | Description |
|---|---|---|
| `base_url` | | Base URL of the Verifier API (e.g., "https://verifier.example.com/v1"). |
| `timeout` | | HTTP request timeout in seconds (default: 10). |
Example
```python
source = VerifierPolicySource("https://verifier.example.com/v1")
policy, version = source.fetch("toxicity-auditor")
```
Source code in packages/lucid-sdk/lucid_sdk/policy_source.py
__init__(base_url=None, timeout=10.0, api_key=None)
Initialize the VerifierPolicySource.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `base_url` | `Optional[str]` | Base URL of the Verifier API. If not provided, reads from the LUCID_VERIFIER_URL environment variable. | `None` |
| `timeout` | `float` | HTTP request timeout in seconds. | `10.0` |
| `api_key` | `Optional[str]` | Optional API key for authentication. If not provided, reads from the LUCID_API_KEY environment variable. | `None` |
Source code in packages/lucid-sdk/lucid_sdk/policy_source.py
fetch(auditor_id)
Fetch policy from Verifier API.
Calls GET {base_url}/auditors/{auditor_id}/policy?public=true to retrieve the current policy for the specified auditor.
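Building the documented route can be sketched as a small helper (the function name is illustrative; only the URL shape comes from the documentation above):

```python
from urllib.parse import quote, urlencode

def policy_url(base_url: str, auditor_id: str) -> str:
    """Build the Verifier policy endpoint URL documented above (sketch)."""
    base = base_url.rstrip("/")
    query = urlencode({"public": "true"})
    return f"{base}/auditors/{quote(auditor_id, safe='')}/policy?{query}"

print(policy_url("https://verifier.example.com/v1", "toxicity-auditor"))
# https://verifier.example.com/v1/auditors/toxicity-auditor/policy?public=true
```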
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `auditor_id` | `str` | The unique identifier of the auditor. | *required* |
Returns:
| Type | Description |
|---|---|
| `Tuple[AuditorPolicy, str]` | A tuple of (AuditorPolicy, version_string). |
Raises:
| Type | Description |
|---|---|
| `PolicySourceError` | If the request fails or the policy is not found. |
Source code in packages/lucid-sdk/lucid_sdk/policy_source.py
lucid_sdk.policy_source.FilePolicySource
Bases: PolicySource
Loads policies from local YAML files.
This source loads policies from the local filesystem, useful for development, testing, or air-gapped environments.
Attributes:
| Name | Type | Description |
|---|---|---|
| `path` | | Path to the YAML policy file. |
Example
```python
source = FilePolicySource("/path/to/policy.yaml")
policy, version = source.fetch("my-auditor")
```
Source code in packages/lucid-sdk/lucid_sdk/policy_source.py
__init__(path)
Initialize the FilePolicySource.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `path` | `str` | Path to the YAML policy file. | *required* |
Source code in packages/lucid-sdk/lucid_sdk/policy_source.py
fetch(auditor_id)
Load policy from YAML file.
The version is derived from the file's modification time.
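Deriving a version string from the file's mtime can be sketched as follows (the helper name is illustrative, not the SDK's):

```python
import os
import tempfile
from datetime import datetime, timezone

def file_version(path: str) -> str:
    """Derive a version string from the file's mtime as an ISO timestamp (sketch)."""
    mtime = os.path.getmtime(path)
    return datetime.fromtimestamp(mtime, tz=timezone.utc).isoformat()

# Demo with a throwaway policy file
with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
    f.write("rules: []\n")
    path = f.name
print(file_version(path))  # e.g. "2025-06-01T12:00:00.123456+00:00"
os.unlink(path)
```

Because the version changes whenever the file is rewritten, a caching engine can detect updates without parsing the YAML.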
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `auditor_id` | `str` | The auditor ID (used for logging, not filtering). | *required* |
Returns:
| Type | Description |
|---|---|
| `Tuple[AuditorPolicy, str]` | A tuple of (AuditorPolicy, version_string). The version is the file's mtime as an ISO timestamp. |
Raises:
| Type | Description |
|---|---|
| `PolicySourceError` | If the file cannot be read or parsed. |
Source code in packages/lucid-sdk/lucid_sdk/policy_source.py
lucid_sdk.policy_source.PolicySourceError
Bases: LucidError
Exception raised when policy fetching fails.
Source code in packages/lucid-sdk/lucid_sdk/policy_source.py
Models & Schemas
lucid_schemas.claim.Claim
Bases: VersionedSchema
Individual assertion without signature (RFC 9334 Claim).
A Claim is the atomic unit of attestation data. It represents a single assertion made by an Attester (auditor) about some aspect of the system or data being audited.
Claims do NOT include signatures - they are bundled into Evidence containers which provide a single signature covering all claims. This is more efficient than signing each claim individually.
Source code in packages/lucid-schemas/lucid_schemas/claim.py
validate_value_constraints(v)
classmethod
Validate size and depth constraints for the value field.
Source code in packages/lucid-schemas/lucid_schemas/claim.py
lucid_schemas.evidence.Evidence
Bases: VersionedSchema
Container of Claims from a single Attester (RFC 9334 Evidence).
Evidence bundles one or more Claims and provides a single cryptographic signature covering all of them. This is more efficient than signing each claim individually (as was done with Measurements).
The signature flow is:

1. Attester creates Claims (unsigned assertions)
2. Attester bundles Claims into Evidence
3. Attester signs the Evidence once (covering all Claims)
4. Verifier verifies one signature per Evidence
This replaces the per-Measurement signature approach.
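The sign-once-over-the-bundle idea can be sketched with an HMAC standing in for the real signature scheme (which this schema description does not specify); the dict shapes are illustrative, not the actual Evidence model:

```python
import hashlib
import hmac
import json

def sign_evidence(claims: list, key: bytes) -> dict:
    """Bundle claims and sign once over their canonical JSON (HMAC as a stand-in)."""
    payload = json.dumps(claims, sort_keys=True, separators=(",", ":")).encode()
    signature = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_evidence(evidence: dict, key: bytes) -> bool:
    """Verify the single signature covering all bundled claims."""
    payload = json.dumps(evidence["claims"], sort_keys=True, separators=(",", ":")).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, evidence["signature"])

ev = sign_evidence([{"name": "toxicity.score", "value": 0.1}], b"secret")
print(verify_evidence(ev, b"secret"))  # True
```

One signature verification per Evidence bundle replaces one per claim, which is the efficiency gain the docstring describes.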
Source code in packages/lucid-schemas/lucid_schemas/evidence.py
validate_signature_format(v)
classmethod
Validate that signature looks like base64-encoded data (standard or URL-safe).
Source code in packages/lucid-schemas/lucid_schemas/evidence.py
lucid_schemas.attestation.AttestationResult
Bases: VersionedSchema
The final AI Passport issued by the Verifier (EAT-inspired).
Source code in packages/lucid-schemas/lucid_schemas/attestation.py
lucid_schemas.enums.AuditDecision
Bases: str, Enum
Decision an auditor can make about a request/response.
Source code in packages/lucid-schemas/lucid_schemas/enums.py
Policy Schemas
lucid_schemas.policy.AuditorPolicy
Bases: VersionedSchema
Complete policy definition for an auditor.
Defines what claims an auditor must produce, the rules for evaluating those claims, and how violations should be enforced.
Source code in packages/lucid-schemas/lucid_schemas/policy.py
validate_control_mappings(v, info)
classmethod
Validate that control mappings reference declared frameworks.
Source code in packages/lucid-schemas/lucid_schemas/policy.py
lucid_schemas.policy.PolicyConfig
Bases: LucidBaseModel
Configuration values for policy evaluation.
PolicyConfig holds behavioral settings like thresholds, feature flags, and model versions that can be referenced in policy rule conditions. This replaces environment variables for behavioral settings, enabling dynamic policy updates without redeploying auditors.
Config values are accessed in LPL expressions via `config.*` syntax:

```yaml
condition: "claims['toxicity.score'].value < config.toxicity_threshold"
```
Example YAML
```yaml
config:
  toxicity_threshold: 0.8
  enable_pii_detection: true
  model_version: "v2"
  allowed_regions:
    - US
    - EU
```
Source code in packages/lucid-schemas/lucid_schemas/policy.py
lucid_schemas.policy.PolicyRule
Bases: BasePolicyRule
A single policy rule with conditions and actions.
Rules use LPL (Lucid Policy Language) expressions to evaluate claims and determine the appropriate action.
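A hypothetical rule sketch, reusing the `claims[...]` / `config.*` condition syntax shown for PolicyConfig (the surrounding YAML field names here are assumptions, not taken from the schema):

```yaml
rules:
  - name: block-high-toxicity
    condition: "claims['toxicity.score'].value > config.toxicity_threshold"
    action: deny
```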
Source code in packages/lucid-schemas/lucid_schemas/policy.py
lucid_schemas.policy.EnforcementMode
Bases: str, Enum
Enforcement mode for policy violations.
Source code in packages/lucid-schemas/lucid_schemas/policy.py
Optional Import Utilities
Graceful degradation for optional dependencies without try/except boilerplate.
lucid_sdk.imports.optional_import(module_name, *, fallback=None, min_version=None, package_name=None, warn_on_missing=True, submodules=None)
Import a module optionally, returning a fallback if not available.
This function attempts to import a module and returns it if successful. If the import fails (e.g., module not installed), it returns either:

- The provided fallback
- A `MockModule` that logs warnings on access
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `module_name` | `str` | The name of the module to import (e.g., `"presidio_analyzer"`). | *required* |
| `fallback` | `Optional[Union[Type[T], Callable[[], T], Any]]` | Optional fallback to return if import fails: a class to instantiate, a callable that returns the fallback, or any other value to return directly. | `None` |
| `min_version` | `Optional[str]` | Optional minimum version string (e.g., `"1.0.0"`). | `None` |
| `package_name` | `Optional[str]` | Optional PyPI package name if different from module name. | `None` |
| `warn_on_missing` | `bool` | Whether to log a warning when the module is missing. | `True` |
| `submodules` | `Optional[List[str]]` | Optional list of submodule names to also import. | `None` |

Returns:

| Type | Description |
|---|---|
| `Any` | The imported module, or the fallback/`MockModule` if import fails. |
Examples:

Basic usage:

```python
presidio = optional_import("presidio_analyzer")
if presidio:
    analyzer = presidio.AnalyzerEngine()
```

With a fallback class:

```python
class MockDetector:
    def detect(self, text):
        return []

detector_lib = optional_import("detect_secrets", fallback=MockDetector)
detector = detector_lib.Detector() if detector_lib else MockDetector()
```

With version requirement:

```python
torch = optional_import("torch", min_version="2.0.0")
```

Different package name:

```python
cv2 = optional_import("cv2", package_name="opencv-python")
```
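The core of this pattern can be sketched with the standard library alone (`try_import` below is a simplified stand-in, not the SDK function — it omits fallbacks, version checks, and warnings):

```python
import importlib


def try_import(module_name):
    # Return the module if installed, else None — no try/except at call sites.
    try:
        return importlib.import_module(module_name)
    except ImportError:
        return None


json_mod = try_import("json")            # stdlib module: always present
missing = try_import("no_such_package")  # not installed -> None
```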
Source code in packages/lucid-sdk/lucid_sdk/imports.py
lucid_sdk.imports.OptionalDependency
Utility class for checking optional dependency availability.
Provides static methods for checking and managing optional dependencies.
Example
```python
if OptionalDependency.is_available("presidio_analyzer"):
    from presidio_analyzer import AnalyzerEngine
    analyzer = AnalyzerEngine()
else:
    analyzer = MockAnalyzer()

# Get all available dependencies
available = OptionalDependency.list_available()
```
Source code in packages/lucid-sdk/lucid_sdk/imports.py
get_version(module_name)
staticmethod
Get the version of an installed module.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `module_name` | `str` | The module to check. | *required* |

Returns:

| Type | Description |
|---|---|
| `Optional[str]` | Version string or `None` if not available. |
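A version lookup like this can be approximated with `importlib.metadata` (a sketch of the idea, not the SDK implementation):

```python
from importlib import metadata


def get_dist_version(dist_name):
    # Return the installed distribution's version string, or None if absent.
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None
```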
Source code in packages/lucid-sdk/lucid_sdk/imports.py
is_available(module_name)
staticmethod
Check if a module is available.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `module_name` | `str` | The module to check. | *required* |

Returns:

| Type | Description |
|---|---|
| `bool` | `True` if the module is available and importable. |
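An availability check like this can be done without importing the module, e.g. via `importlib.util.find_spec` (a sketch, not the SDK implementation):

```python
import importlib.util


def module_available(module_name):
    # True if the module can be found on the import path,
    # without actually importing it.
    return importlib.util.find_spec(module_name) is not None
```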
Source code in packages/lucid-sdk/lucid_sdk/imports.py
list_available()
staticmethod
List all available optional dependencies.
Returns:
| Type | Description |
|---|---|
| `Dict[str, str]` | Dict mapping module names to their versions. |
Source code in packages/lucid-sdk/lucid_sdk/imports.py
list_missing()
staticmethod
List all missing optional dependencies.
Returns:
| Type | Description |
|---|---|
| `Dict[str, str]` | Dict mapping module names to the reason they're missing. |
Source code in packages/lucid-sdk/lucid_sdk/imports.py
require(module_name, feature='this feature')
staticmethod
Require a module, raising an error if not available.
Use this when a feature absolutely requires a dependency.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `module_name` | `str` | The module that is required. | *required* |
| `feature` | `str` | Description of the feature that requires it. | `'this feature'` |

Raises:

| Type | Description |
|---|---|
| `ImportError` | If the module is not available. |
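The behaviour can be sketched as follows (a simplified stand-in; the SDK's actual error message may differ):

```python
import importlib


def require_module(module_name, feature="this feature"):
    # Import the module or raise ImportError with an actionable message.
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"{module_name} is required for {feature}. "
            f"Install it with: pip install {module_name}"
        ) from exc
```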
Source code in packages/lucid-sdk/lucid_sdk/imports.py
lucid_sdk.imports.requires_dependency(module_name, fallback_result=None, feature=None)
Decorator that makes a function require an optional dependency.
If the dependency is not available, the function either:

- Returns the `fallback_result` (if provided)
- Raises `ImportError` (if no fallback)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `module_name` | `str` | The required module name. | *required* |
| `fallback_result` | `Any` | Value to return if dependency is missing. | `None` |
| `feature` | `Optional[str]` | Description of the feature for error messages. | `None` |

Returns:

| Type | Description |
|---|---|
| `Callable` | Decorator function. |
Example
```python
@requires_dependency("presidio_analyzer", fallback_result=[])
def detect_pii(text: str) -> List[dict]:
    from presidio_analyzer import AnalyzerEngine
    analyzer = AnalyzerEngine()
    results = analyzer.analyze(text=text, language="en")
    return [r.to_dict() for r in results]
```
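The decorator pattern itself can be reproduced with the standard library (`requires_dep` here is a simplified stand-in for `requires_dependency`):

```python
import functools
import importlib


def requires_dep(module_name, fallback_result=None):
    # Decorator: short-circuit to fallback_result when the dependency is missing.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                importlib.import_module(module_name)
            except ImportError:
                return fallback_result
            return func(*args, **kwargs)
        return wrapper
    return decorator


@requires_dep("no_such_package", fallback_result=[])
def scan(text):
    return ["would use the real library here"]


print(scan("hello"))  # -> [] because the dependency is absent
```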
Source code in packages/lucid-sdk/lucid_sdk/imports.py
Pre-defined Fallbacks
These fallback configurations are available for common auditor dependencies:
| Fallback | Package | Description |
|---|---|---|
| `FALLBACK_PRESIDIO` | `presidio_analyzer` | PII detection |
| `FALLBACK_LLM_GUARD` | `llm_guard` | Input/output guardrails |
| `FALLBACK_DETECT_SECRETS` | `detect_secrets` | Secret detection |
| `FALLBACK_FAIRLEARN` | `fairlearn` | Fairness metrics |
| `FALLBACK_RAGAS` | `ragas` | RAG evaluation |
Standard Claim Types
Pre-defined claim types for common audit controls, aligned with RATS RFC 9334.
lucid_sdk.claim_types.PIIDetectionClaim
Factory for PII detection claims.
Used by pii-compliance auditor for GDPR, HIPAA, CCPA compliance.
Example
```python
claim = PIIDetectionClaim.create(
    entities_found=[
        {"type": "SSN", "start": 10, "end": 21, "score": 0.99},
        {"type": "EMAIL", "start": 30, "end": 50, "score": 0.95},
    ],
    redacted=True,
    jurisdiction="US",
    compliance_framework="HIPAA",
)
```
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
create(entities_found, confidence=0.95, redacted=False, jurisdiction=None, phase='request', nonce=None, compliance_framework=None, control_id=None)
classmethod
Create a PII detection claim.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `entities_found` | `List[Dict[str, Any]]` | List of detected PII entities with type, position, score. | *required* |
| `confidence` | `float` | Overall confidence in detection. | `0.95` |
| `redacted` | `bool` | Whether PII was redacted. | `False` |
| `jurisdiction` | `Optional[str]` | Applicable jurisdiction (US, EU, IN, etc.). | `None` |
| `phase` | `str` | Lifecycle phase (request, response). | `'request'` |
| `nonce` | `Optional[str]` | Optional anti-replay nonce. | `None` |
| `compliance_framework` | `Optional[str]` | Framework (GDPR, HIPAA, CCPA, DPDP). | `None` |

Returns:

| Type | Description |
|---|---|
| `Claim` | Claim instance. |
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
none_found(phase='request', nonce=None)
classmethod
Create a claim indicating no PII was found.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `phase` | `str` | Lifecycle phase. | `'request'` |
| `nonce` | `Optional[str]` | Optional anti-replay nonce. | `None` |

Returns:

| Type | Description |
|---|---|
| `Claim` | Claim instance indicating no PII. |
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
lucid_sdk.claim_types.ToxicityClaim
Factory for toxicity detection claims.
Used by guardrails auditor for content safety.
Example
```python
claim = ToxicityClaim.create(
    score=0.85,
    categories=["hate_speech", "harassment"],
    threshold=0.7,
    exceeded_threshold=True,
    compliance_framework="EU_AI_ACT",
)
```
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
create(score, categories=None, threshold=0.7, exceeded_threshold=None, category_scores=None, phase='response', nonce=None, compliance_framework=None, control_id=None)
classmethod
Create a toxicity detection claim.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `score` | `float` | Overall toxicity score (0-1). | *required* |
| `categories` | `Optional[List[str]]` | List of detected toxicity categories. | `None` |
| `threshold` | `float` | Threshold used for evaluation. | `0.7` |
| `exceeded_threshold` | `Optional[bool]` | Whether score exceeded threshold. | `None` |
| `category_scores` | `Optional[Dict[str, float]]` | Per-category scores. | `None` |
| `phase` | `str` | Lifecycle phase. | `'response'` |
| `nonce` | `Optional[str]` | Optional anti-replay nonce. | `None` |

Returns:

| Type | Description |
|---|---|
| `Claim` | Claim instance. |
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
lucid_sdk.claim_types.InjectionDetectionClaim
Factory for injection detection claims.
Used by guardrails auditor for prompt injection defense.
Example
```python
claim = InjectionDetectionClaim.create(
    detected=True,
    injection_type="jailbreak",
    score=0.92,
    pattern_matched="ignore previous instructions",
    compliance_framework="EU_AI_ACT",
)
```
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
create(detected, injection_type=None, score=0.0, pattern_matched=None, phase='request', nonce=None, compliance_framework=None, control_id=None)
classmethod
Create an injection detection claim.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `detected` | `bool` | Whether injection was detected. | *required* |
| `injection_type` | `Optional[str]` | Type of injection (direct, indirect, jailbreak). | `None` |
| `score` | `float` | Detection confidence score. | `0.0` |
| `pattern_matched` | `Optional[str]` | Pattern or content that triggered detection. | `None` |
| `phase` | `str` | Lifecycle phase. | `'request'` |
| `nonce` | `Optional[str]` | Optional anti-replay nonce. | `None` |

Returns:

| Type | Description |
|---|---|
| `Claim` | Claim instance. |
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
lucid_sdk.claim_types.SecretDetectionClaim
Factory for secret/credential detection claims.
Used by secrets auditor for credential leak prevention.
Example
```python
claim = SecretDetectionClaim.create(
    secrets_found=[
        {"type": "aws_key", "line": 5, "redacted": True},
        {"type": "api_key", "line": 12, "redacted": True},
    ],
    compliance_framework="PCI_DSS",
)
```
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
create(secrets_found, redacted=False, phase='request', nonce=None, compliance_framework=None, control_id=None)
classmethod
Create a secret detection claim.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `secrets_found` | `List[Dict[str, Any]]` | List of detected secrets with type and position. | *required* |
| `redacted` | `bool` | Whether secrets were redacted. | `False` |
| `phase` | `str` | Lifecycle phase. | `'request'` |
| `nonce` | `Optional[str]` | Optional anti-replay nonce. | `None` |

Returns:

| Type | Description |
|---|---|
| `Claim` | Claim instance. |
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
lucid_sdk.claim_types.GroundednessClaim
Factory for RAG groundedness claims.
Used by rag-quality auditor to verify responses are grounded in sources.
Example
```python
claim = GroundednessClaim.create(
    score=0.92,
    cited_sources=3,
    hallucination_detected=False,
)
```
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
create(score, cited_sources=0, total_claims=0, supported_claims=0, hallucination_detected=False, threshold=0.8, phase='response', nonce=None, compliance_framework=None, control_id=None)
classmethod
Create a groundedness claim.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `score` | `float` | Groundedness score (0-1). | *required* |
| `cited_sources` | `int` | Number of sources cited. | `0` |
| `total_claims` | `int` | Total claims in the response. | `0` |
| `supported_claims` | `int` | Number of claims with source support. | `0` |
| `hallucination_detected` | `bool` | Whether hallucination was detected. | `False` |
| `threshold` | `float` | Threshold for acceptable groundedness. | `0.8` |
| `phase` | `str` | Lifecycle phase. | `'response'` |
| `nonce` | `Optional[str]` | Optional anti-replay nonce. | `None` |

Returns:

| Type | Description |
|---|---|
| `Claim` | Claim instance. |
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
lucid_sdk.claim_types.FairnessClaim
Factory for bias/fairness claims.
Used by fairness auditor for EU AI Act Art.10, Colorado 6-1-1703(1).
Example
```python
claim = FairnessClaim.create(
    demographic_parity=0.85,
    equalized_odds=0.78,
    protected_attributes=["gender", "age"],
    threshold=0.8,
    compliance_framework="EU_AI_ACT",
)
```
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
create(demographic_parity=None, equalized_odds=None, disparate_impact_ratio=None, protected_attributes=None, group_metrics=None, threshold=0.8, phase='response', nonce=None, compliance_framework=None, control_id=None)
classmethod
Create a fairness metrics claim.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `demographic_parity` | `Optional[float]` | Demographic parity score. | `None` |
| `equalized_odds` | `Optional[float]` | Equalized odds score. | `None` |
| `disparate_impact_ratio` | `Optional[float]` | 80% rule ratio. | `None` |
| `protected_attributes` | `Optional[List[str]]` | List of protected attributes evaluated. | `None` |
| `group_metrics` | `Optional[Dict[str, Dict[str, float]]]` | Per-group metric breakdowns. | `None` |
| `threshold` | `float` | Threshold for acceptable fairness. | `0.8` |
| `phase` | `str` | Lifecycle phase. | `'response'` |
| `nonce` | `Optional[str]` | Optional anti-replay nonce. | `None` |
| `compliance_framework` | `Optional[str]` | Framework (EU_AI_ACT, CCPA_ADMT, etc.). | `None` |

Returns:

| Type | Description |
|---|---|
| `Claim` | Claim instance. |
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
lucid_sdk.claim_types.WatermarkClaim
Factory for AI watermark/provenance claims.
Used by watermark auditor for EU AI Act Art.50 and other provenance requirements.
Example
```python
claim = WatermarkClaim.create(
    watermark_embedded=True,
    watermark_type="statistical",
    detectable=True,
    compliance_framework="EU_AI_ACT",
)
```
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
create(watermark_embedded, watermark_type=None, detectable=True, detection_score=None, c2pa_signed=False, phase='response', nonce=None, compliance_framework=None, control_id=None)
classmethod
Create a watermark/provenance claim.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `watermark_embedded` | `bool` | Whether watermark was embedded. | *required* |
| `watermark_type` | `Optional[str]` | Type of watermark (statistical, c2pa, synthid). | `None` |
| `detectable` | `bool` | Whether watermark is detectable. | `True` |
| `detection_score` | `Optional[float]` | Detection confidence score. | `None` |
| `c2pa_signed` | `bool` | Whether C2PA provenance was added. | `False` |
| `phase` | `str` | Lifecycle phase. | `'response'` |
| `nonce` | `Optional[str]` | Optional anti-replay nonce. | `None` |

Returns:

| Type | Description |
|---|---|
| `Claim` | Claim instance. |
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
lucid_sdk.claim_types.ModelSecurityClaim
Factory for model security claims.
Used by model-security auditor for artifact safety.
Example
```python
claim = ModelSecurityClaim.create(
    format_valid=True,
    hash_verified=True,
    no_malware=True,
    provenance_verified=True,
    compliance_framework="EU_AI_ACT",
)
```
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
create(format_valid, hash_verified, no_malware, provenance_verified, model_hash=None, format_type=None, vulnerabilities=None, phase='artifact', nonce=None, compliance_framework=None, control_id=None)
classmethod
Create a model security claim.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `format_valid` | `bool` | Whether model format is valid (safetensors). | *required* |
| `hash_verified` | `bool` | Whether hash matches manifest. | *required* |
| `no_malware` | `bool` | Whether scan found no malware. | *required* |
| `provenance_verified` | `bool` | Whether provenance signature is valid. | *required* |
| `model_hash` | `Optional[str]` | SHA-256 hash of model. | `None` |
| `format_type` | `Optional[str]` | Model format (safetensors, pytorch, etc.). | `None` |
| `vulnerabilities` | `Optional[List[Dict[str, Any]]]` | List of any vulnerabilities found. | `None` |
| `phase` | `str` | Lifecycle phase. | `'artifact'` |
| `nonce` | `Optional[str]` | Optional anti-replay nonce. | `None` |

Returns:

| Type | Description |
|---|---|
| `Claim` | Claim instance. |
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
lucid_sdk.claim_types.SovereigntyClaim
Factory for data sovereignty claims.
Used by sovereignty auditor for GDPR Art.44-49, India DPDP §17.
Example
```python
claim = SovereigntyClaim.create(
    data_location="EU",
    allowed_locations=["EU", "US"],
    cross_border_transfer=False,
    compliant=True,
    compliance_framework="GDPR",
)
```
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
create(data_location, allowed_locations, cross_border_transfer=False, transfer_mechanism=None, compliant=True, user_jurisdiction=None, phase='request', nonce=None, compliance_framework=None, control_id=None)
classmethod
Create a data sovereignty claim.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `data_location` | `str` | Where data is being processed. | *required* |
| `allowed_locations` | `List[str]` | List of allowed processing locations. | *required* |
| `cross_border_transfer` | `bool` | Whether data crosses borders. | `False` |
| `transfer_mechanism` | `Optional[str]` | Legal mechanism for transfer (SCC, adequacy, etc.). | `None` |
| `compliant` | `bool` | Whether sovereignty rules are met. | `True` |
| `user_jurisdiction` | `Optional[str]` | User's jurisdiction. | `None` |
| `phase` | `str` | Lifecycle phase. | `'request'` |
| `nonce` | `Optional[str]` | Optional anti-replay nonce. | `None` |
| `compliance_framework` | `Optional[str]` | Framework (GDPR, DPDP, PIPL, etc.). | `None` |

Returns:

| Type | Description |
|---|---|
| `Claim` | Claim instance. |
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
Claim Categories
lucid_sdk.claim_types.ClaimCategory
Bases: str, Enum
Categories for audit claims.
Source code in packages/lucid-sdk/lucid_sdk/claim_types.py
Testing Utilities
The lucid_sdk.testing module provides shared fixtures and helpers for auditor testing.
Pytest Fixtures
```python
# In conftest.py
from lucid_sdk.testing import pytest_plugins

# Or import specific fixtures
from lucid_sdk.testing import (
    mock_config,
    mock_http_factory,
    test_client,
    sample_request_data,
    sample_response_data,
)
```
lucid_sdk.testing.fixtures.MockConfig
dataclass
Mock configuration for testing auditors.
Provides default values that work for most test scenarios.
Source code in packages/lucid-sdk/lucid_sdk/testing/fixtures.py
__getattr__(name)
Allow accessing any attribute (returns None for undefined).
Source code in packages/lucid-sdk/lucid_sdk/testing/fixtures.py
lucid_sdk.testing.fixtures.MockHTTPClientFactory
Mock HTTP client factory for testing without network calls.
All HTTP operations are mocked and can be inspected.
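The recording-mock idea can be illustrated in a few lines (a generic sketch, not the SDK class — the method name and return shape here are assumptions):

```python
class RecordingFactory:
    """Record every call so tests can inspect what the auditor sent."""

    def __init__(self):
        self.calls = []

    async def submit_evidence(self, **kwargs):
        # Record the call instead of performing network I/O.
        self.calls.append(("submit_evidence", kwargs))
        return {"status": "ok"}

    def reset(self):
        self.calls.clear()
```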
Source code in packages/lucid-sdk/lucid_sdk/testing/fixtures.py
chain_call(next_auditor_url, data, lucid_context)
async
Mock chain call - records call and returns configurable response.
Source code in packages/lucid-sdk/lucid_sdk/testing/fixtures.py
close()
async
Mock close - no-op.
Source code in packages/lucid-sdk/lucid_sdk/testing/fixtures.py
get_chain_client()
async
Return a mock chain client.
Source code in packages/lucid-sdk/lucid_sdk/testing/fixtures.py
get_client()
async
Return a mock HTTP client.
Source code in packages/lucid-sdk/lucid_sdk/testing/fixtures.py
post_with_retry(url, json_data, max_retries=3, timeout=None)
async
Mock POST with retry - records call and returns mock response.
Source code in packages/lucid-sdk/lucid_sdk/testing/fixtures.py
reset()
Reset all recorded calls.
Source code in packages/lucid-sdk/lucid_sdk/testing/fixtures.py
submit_evidence(auditor_id, model_id, session_id, nonce, decision, metadata, phase='request')
async
Mock evidence submission - records call and returns success.
Source code in packages/lucid-sdk/lucid_sdk/testing/fixtures.py
lucid_sdk.testing.fixtures.MockAuditor
Mock auditor for testing chains and endpoints.
Can be configured to return specific results for each phase.
Source code in packages/lucid-sdk/lucid_sdk/testing/fixtures.py
reset()
Reset all recorded calls and responses.
Source code in packages/lucid-sdk/lucid_sdk/testing/fixtures.py
Test Data Generators
lucid_sdk.testing.helpers.generate_pii_text(*, include_ssn=True, include_email=True, include_phone=False, include_credit_card=False, include_address=False, include_name=False, context='general')
Generate text containing PII for testing PII detection.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `include_ssn` | `bool` | Include a Social Security Number. | `True` |
| `include_email` | `bool` | Include an email address. | `True` |
| `include_phone` | `bool` | Include a phone number. | `False` |
| `include_credit_card` | `bool` | Include a credit card number. | `False` |
| `include_address` | `bool` | Include a street address. | `False` |
| `include_name` | `bool` | Include a person's name. | `False` |
| `context` | `str` | Context for the text (general, medical, financial). | `'general'` |

Returns:

| Type | Description |
|---|---|
| `str` | Text string containing the specified PII types. |
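A minimal generator of this kind only needs string assembly with fake, syntactically valid identifiers (a stand-in sketch, not the SDK helper):

```python
def make_pii_text(include_ssn=True, include_email=True):
    # Build test text containing fake PII patterns for detector tests.
    parts = ["Patient intake notes:"]
    if include_ssn:
        parts.append("SSN is 123-45-6789.")
    if include_email:
        parts.append("Reach me at jane.doe@example.com.")
    return " ".join(parts)
```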
Source code in packages/lucid-sdk/lucid_sdk/testing/helpers.py
lucid_sdk.testing.helpers.generate_toxic_text(category='general', severity='medium')
Generate text with toxic content for testing toxicity detection.
Note: This generates mild test cases suitable for automated testing. Real toxic content detection should be tested with curated datasets.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `category` | `str` | Category of toxicity (general, harassment, profanity). | `'general'` |
| `severity` | `str` | Severity level (low, medium, high). | `'medium'` |

Returns:

| Type | Description |
|---|---|
| `str` | Text string with toxic content indicators. |
Source code in packages/lucid-sdk/lucid_sdk/testing/helpers.py
lucid_sdk.testing.helpers.generate_injection_text(injection_type='direct', include_payload=True)
Generate text with injection patterns for testing injection detection.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `injection_type` | `str` | Type of injection (direct, indirect, jailbreak, encoding). | `'direct'` |
| `include_payload` | `bool` | Whether to include a payload after the injection. | `True` |

Returns:

| Type | Description |
|---|---|
| `str` | Text string with injection patterns. |
Source code in packages/lucid-sdk/lucid_sdk/testing/helpers.py
lucid_sdk.testing.helpers.generate_secret_text(secret_type='api_key', context='code')
Generate text containing secrets for testing secret detection.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `secret_type` | `str` | Type of secret (api_key, aws_key, github_token, password). | `'api_key'` |
| `context` | `str` | Context (code, config, message). | `'code'` |

Returns:

| Type | Description |
|---|---|
| `str` | Text string containing secret patterns. |
Source code in packages/lucid-sdk/lucid_sdk/testing/helpers.py
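A self-contained sketch of testing a secret scanner against such text. The sample line and the key format are stand-ins (the token is obviously fake); real scanners use per-provider rules:

```python
import re

# Stand-in for generate_secret_text(secret_type="api_key", context="code"):
sample = 'API_KEY = "sk_live_0000000000000000"  # injected for the test'

# Toy key-prefix check; not the SDK's actual detection logic.
match = re.search(r"sk_(live|test)_[A-Za-z0-9]{8,}", sample)
assert match is not None
```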
lucid_sdk.testing.helpers.generate_clean_text(length='medium', topic='general')
Generate clean text with no safety issues for testing false positives.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `length` | `str` | Length of text (short, medium, long). | `'medium'` |
| `topic` | `str` | Topic of text (general, technical, casual). | `'general'` |

Returns:

| Type | Description |
|---|---|
| `str` | Clean text string that should not trigger any detections. |
Source code in packages/lucid-sdk/lucid_sdk/testing/helpers.py
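Clean text is the false-positive half of a detector test: the same checks that fire on the synthetic unsafe samples should stay silent here. A self-contained sketch with a stand-in sample and toy patterns:

```python
import re

# Stand-in for generate_clean_text(length="short", topic="technical"):
sample = "The service batches log writes and flushes them every five seconds."

# The toy PII and secret patterns must NOT match clean text.
email_hit = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", sample)
key_hit = re.search(r"sk_(live|test)_[A-Za-z0-9]{8,}", sample)
assert email_hit is None and key_hit is None
```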
Assertion Helpers
lucid_sdk.testing.fixtures.assert_proceed(result, data_contains=None)
Assert result is PROCEED with optional data check.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `result` | `Any` | The AuditResult to check. | *required* |
| `data_contains` | `Optional[Dict[str, Any]]` | Optional dict of key-value pairs that must be in result.data. | `None` |
Source code in packages/lucid-sdk/lucid_sdk/testing/fixtures.py
lucid_sdk.testing.fixtures.assert_deny(result, reason_contains=None)
Assert result is DENY with optional reason check.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `result` | `Any` | The AuditResult to check. | *required* |
| `reason_contains` | `Optional[str]` | Optional substring that must be in the reason. | `None` |
|
Source code in packages/lucid-sdk/lucid_sdk/testing/fixtures.py
lucid_sdk.testing.fixtures.assert_warn(result, reason_contains=None)
Assert result is WARN with optional reason check.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `result` | `Any` | The AuditResult to check. | *required* |
| `reason_contains` | `Optional[str]` | Optional substring that must be in the reason. | `None` |
|
Source code in packages/lucid-sdk/lucid_sdk/testing/fixtures.py
lucid_sdk.testing.fixtures.assert_redact(result, modifications_contain=None)
Assert result is REDACT with optional modifications check.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `result` | `Any` | The AuditResult to check. | *required* |
| `modifications_contain` | `Optional[Dict[str, Any]]` | Optional dict of expected modifications. | `None` |
|
Source code in packages/lucid-sdk/lucid_sdk/testing/fixtures.py
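The four assertion helpers share one shape: check the verdict, then an optional detail. The self-contained sketch below mirrors that pattern with stub types so it can run standalone; `FakeResult` and its `decision`/`reason` fields are assumptions for illustration only, not the SDK's AuditResult. In real tests, import the helpers from `lucid_sdk.testing.fixtures`.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

# Hypothetical stand-in for an AuditResult; field names are assumptions.
@dataclass
class FakeResult:
    decision: str
    reason: Optional[str] = None
    data: Dict[str, Any] = field(default_factory=dict)

def assert_deny_sketch(result, reason_contains=None):
    # Mirrors the documented behavior: verdict first, then optional substring.
    assert result.decision == "DENY", f"expected DENY, got {result.decision}"
    if reason_contains is not None:
        assert reason_contains in (result.reason or "")

# Passes: the result is a DENY and its reason mentions "PII".
r = FakeResult("DENY", reason="PII detected in input")
assert_deny_sketch(r, reason_contains="PII")
```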