Lexos is a deterministic execution and policy enforcement layer for AI-driven systems operating in regulated environments. It treats AI outputs as untrusted requests and only allows execution when predefined, auditable policies are satisfied.
About Lexos
Lexos provides Swiss-engineered enterprise infrastructure for deterministic accountability in autonomous AI systems. We decouple AI reasoning from physical execution using a validation proxy that enforces Policy-as-Code.
Core Capabilities
- Deterministic Execution: Validation of AI actions against legal bounds before real-world impact.
- Human-in-the-Loop: Native escalation paths for high-risk decisions.
- Auditability: Tamper-evident records exportable as ISO 19005 (PDF/A) evidence.
- Compliance: Technical framework aligned with the EU AI Act and FINMA guidelines.
What Lexos is not
- Not cloud infrastructure or compute
- Not an AI model or LLM safety layer
- Not monitoring, logging, or observability
- Not prompt filtering or probabilistic guardrails
AI Act Compliance Sprint
Lexos offers an outcome-driven AI Act Compliance Sprint for organizations deploying high-risk AI systems. The sprint delivers execution-time controls, deterministic audit logs, and auditor-ready evidence aligned with EU AI Act Articles 12 and 14.
Frequently Asked Questions
What is deterministic AI accountability?
Deterministic accountability is an architectural guarantee that every action proposed by an autonomous or AI-driven system is evaluated against immutable, predefined logic (Policy-as-Code) at the execution and network layer before it takes effect. Unlike LLM guardrails or prompt filtering, which are probabilistic and model-dependent, Lexos enforces compliance independently of the AI model, producing explicit allow/deny decisions that are auditable and reproducible.
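To make the Policy-as-Code idea concrete, here is a minimal sketch of deterministic evaluation. This is an illustration only, not Lexos's actual API: the request fields, action names, and thresholds are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical execution request as seen at the validation proxy.
# Field names are illustrative, not Lexos's actual schema.
@dataclass(frozen=True)
class ExecutionRequest:
    action: str        # e.g. "transfer_funds"
    amount: float
    jurisdiction: str

def evaluate(request: ExecutionRequest) -> str:
    """Evaluate an untrusted AI-proposed action against fixed rules.

    The rules are predefined and contain no randomness or model calls,
    so the same request always yields the same ALLOW/DENY outcome.
    """
    if request.jurisdiction not in {"CH", "EU"}:
        return "DENY"
    if request.action == "transfer_funds" and request.amount > 10_000:
        return "DENY"
    return "ALLOW"

decision = evaluate(ExecutionRequest("transfer_funds", 25_000, "CH"))
print(decision)  # DENY: amount exceeds the predefined limit
```

Note that the policy inspects only the proposed action, never the model's prompt or weights: this is what makes the decision reproducible and model-agnostic.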
Is Lexos an AI model or an LLM safety tool?
No. Lexos does not modify, monitor, or evaluate the internal behavior of AI models. It operates outside the model, at the execution layer, treating AI outputs as untrusted requests. This design makes Lexos model-agnostic and suitable for environments where probabilistic controls alone are insufficient for legal or regulatory accountability.
How is Lexos different from prompt filtering or guardrails?
Prompt filtering and guardrails attempt to influence or constrain model outputs, which remain probabilistic and non-deterministic. Lexos does not attempt to control what an AI system says. Instead, it controls what the system is allowed to do, enforcing policies at the execution and network level where actions can be deterministically approved or rejected.
How does Lexos support EU AI Act compliance?
Lexos provides a technical governance and execution control layer designed to support compliance obligations for High-Risk AI Systems under the EU AI Act. In particular, Lexos enables:
- Human oversight (Article 14) by enforcing explicit approval or policy conditions before execution.
- Accuracy and robustness safeguards (Article 15) through deterministic decision enforcement.
- Automatic record-keeping (Article 12) via immutable, auditable execution logs.
What kind of evidence does Lexos produce?
Lexos produces immutable decision and execution logs that document policy evaluation outcomes, timestamps, conditions, and approvals. These artifacts are designed to be auditor-readable and can be exported in long-term preservation formats such as ISO 19005 (PDF/A) for regulatory, legal, and compliance use cases.
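One common way to make a log tamper-evident is a hash chain, in which each record cryptographically commits to its predecessor. The sketch below illustrates that general technique; it is not Lexos's actual record format, and the field names are assumptions.

```python
import hashlib
import json

def append_record(log: list[dict], outcome: str, timestamp: str, policy: str) -> None:
    """Append a decision record whose hash commits to the previous record."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"outcome": outcome, "timestamp": timestamp,
            "policy": policy, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)

def verify(log: list[dict]) -> bool:
    """Recompute every hash; editing any record breaks the chain."""
    prev_hash = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != rec["hash"]:
            return False
        prev_hash = rec["hash"]
    return True

log: list[dict] = []
append_record(log, "ALLOW", "2025-01-01T12:00:00Z", "limits-v3")
append_record(log, "DENY", "2025-01-01T12:05:00Z", "limits-v3")
print(verify(log))          # True
log[0]["outcome"] = "DENY"  # retroactive tampering...
print(verify(log))          # False: ...is detectable
```

Because each record's hash covers the previous record's hash, an auditor can verify the whole chain without trusting the system that produced it.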
Why use ISO 19005 (PDF/A) for audit logs?
PDF/A (ISO 19005) is the international standard for long-term electronic document preservation. By exporting decision and execution logs in this format, Lexos ensures audit artifacts are self-contained, human-readable, tamper-evident, and legally admissible for decades without proprietary software. While JSON is useful for system-to-system integration, PDF/A provides a court-ready evidence format.
Does Lexos block AI actions automatically?
Yes, when a policy requires it. Lexos enforces deterministic policies that explicitly define when an action is allowed, denied, or requires human approval. Blocking is not heuristic or statistical: each decision is the result of a predefined rule evaluated at execution time, producing an auditable allow or deny outcome.
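The allow/deny/escalate pattern described above can be sketched as follows. This is a simplified illustration under assumed rules; the action names and threshold are hypothetical, not Lexos's actual policy language.

```python
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    DENY = "deny"
    ESCALATE = "escalate"   # route to a human approver before execution

def decide(action: str, amount: float) -> Decision:
    """Deterministic three-outcome rule evaluated at execution time.

    No heuristics or statistics: the same inputs always produce the
    same decision, so every outcome is reproducible in an audit.
    """
    if action not in {"transfer_funds", "close_account"}:
        return Decision.DENY          # unknown actions are rejected outright
    if action == "transfer_funds" and amount > 10_000:
        return Decision.ESCALATE      # high-risk: requires human approval
    return Decision.ALLOW

print(decide("transfer_funds", 50_000).value)  # escalate
```

The ESCALATE branch is what implements the human-in-the-loop path: the action is neither silently executed nor silently dropped, and the eventual human approval or rejection becomes part of the audit record.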
Does Lexos require access to model inputs or training data?
No. Lexos does not require access to prompts, training data, or model internals. It evaluates execution requests and contextual metadata only, making it suitable for use with proprietary, third-party, or closed-source AI models.