

"Can organisations trust the decisions their AI systems are making?"
The SENTINEL Approach
SENTINEL operates directly in the execution path of AI systems, where decisions are made and actions are triggered. It evaluates, controls, and, when necessary, blocks decisions in real time, before they materialise into outcomes.

SENTINEL ensures that:
- No AI decision executes without meeting defined ethical, operational, and authority conditions
- Every action is traceable, auditable, and provably aligned
- Risk is prevented at the moment it emerges, not reported after the fact

In a landscape of interconnected, high-velocity AI systems, controlling execution is the only reliable form of risk management.

Traditional GRC Systems
GRC systems document policies, controls, and risks around systems. They produce reports after the fact, but do not control how AI behaves in real time.

SENTINEL
SENTINEL operates inside the systems themselves, controlling whether AI decisions are allowed to execute in the first place. Even if every policy document disappeared, SENTINEL would continue to function, because it governs the behaviour of AI directly. SENTINEL, our first product, is not a compliance system, but it enables compliance outcomes.

Enabling Trust at AI Scale
As AI becomes embedded across every industry, organisations face a critical challenge: how to scale AI safely without losing control. AITru enables organisations to deploy AI confidently by ensuring that:
- Decisions remain truthful
- Automation remains aligned with human authority
- Risks are prevented before they cascade
- AI systems remain explainable and accountable

The result is AI that organisations can truly trust. SENTINEL exists to make trustworthy AI possible at enterprise scale.
AITru Management Team
Shivraj Gohil, Chief Executive Officer (CEO)
Shivraj is a senior transformation and change leader with a 20-year track record delivering complex technology, operational, and organisational transformation for global enterprises including IBM, Microsoft, KPMG, Barclays, HSBC, Credit Suisse, Pacific Life Re, Ageas, and the European Central Bank. He has extensive experience working at senior and board level, aligning strategy with large-scale delivery, modernising operating models, and driving measurable business outcomes across highly regulated, mission-critical environments. As Co-Founder and CEO of AITru Solutions, the company behind CITADEL, the world's first constitutional governance architecture for enterprise AI, Shivraj brings a rare combination of enterprise transformation expertise and cutting-edge AI governance leadership, enabling organisations to scale AI safely, strategically, and with full constitutional integrity.

Steve Butler, Chief Artificial Intelligence Governance Officer (CAIGO)
Steve is Co-Founder and CAIGO of AITru Solutions, responsible for innovation and AI research. A globally recognised expert in portfolio and PMO leadership, he is one of the rare figures who bridge enterprise change, complexity science, and emerging AI governance. A certified AI consultant with a long track record of stabilising complex delivery environments, Steve has advised and led transformation across HSBC, Dyson, the Financial Times, Credit Suisse, Quilter, Allen & Overy, the FCA, and Ultra Electronics, backed by decades of portfolio management and governance experience. Steve is also a published author of multiple books and a frequent lecturer and keynote speaker on complexity, strategy, and the future of AI governance, with appearances at global conferences and universities. His work shifts AI out of the realm of reactive automation and into proactive, self-questioning cognition that protects organisations from drift, collapse, and epistemic pollution.
He builds environments where AI does not simply act fast; it acts constitutionally.

Brian Heale, Chief Operating Officer (COO)
Brian is a highly accomplished executive and thought leader with 40 years of experience in the global insurance, reinsurance, and related software sectors. He has worked with several large insurers and was a Managing Consultant at EY. For the last 20 years he has worked with technology companies such as Oracle, Moody's, Towers Watson, and Sapiens, designing, marketing, and selling enterprise software solutions to the global insurance market. He specialises in complex insurance legacy-system transformations, actuarial and accounting systems, and regulatory compliance programmes, including Solvency II and IFRS 17. Brian brings together expertise in the product development, marketing, sales, and implementation of complex software solutions. For the last year he has been party to setting up EGaaS and is now at the forefront of implementing Constitutional AI to solve the industry's most urgent problem: the Crisis of AI Trust and the resulting Governance Gap.

Krum Dimitrov, Chief Technology Officer (CTO)
Krum is a seasoned technology leader and systems architect with deep expertise in building scalable, secure, and AI-enabled enterprise platforms. He designs and leads the engineering vision for SENTINEL, combining cloud-native infrastructure, multi-agent AI, secure data flows, and robust auditability to ensure that AI-driven systems adhere to constitutional governance and regulatory standards. With a background in distributed systems, DevOps, data engineering, and AI integration, Krum oversees the end-to-end technical architecture, from ingestion of structured and unstructured data, through reasoning and decision pipelines, to compliant audit trails and "truth-by-design" outputs.
His role ensures that SENTINEL remains robust, reliable, and future-proof, enabling clients in highly regulated sectors (insurance and reinsurance, banking, healthcare, telecoms) to deploy AI with confidence and liability-safe transparency. As CTO, Krum bridges strategic ambition and technical execution: he turns the theoretical framework and governance philosophy of Constitutional AI into production-grade systems that deliver measurable business value, regulatory compliance, and operational resilience.
In a world where AI operates at machine speed, oversight must operate faster.
SENTINEL NEVER SLEEPS
SENTINEL
Real-Time Control for Enterprise AI Decisions
Enterprises are rapidly embedding artificial intelligence into operations, products, and decision processes. Yet most organisations still rely on traditional governance tools designed for slower, human-driven systems. The problem is simple: AI systems now generate decisions faster than conventional risk, compliance, and oversight frameworks can respond.

SENTINEL Solves This Problem
SENTINEL is the real-time constitutional control layer based on AITru's CITADEL architecture. It sits directly in the execution path of AI systems and determines whether an AI-generated action is allowed to proceed. If the action violates organisational policy, ethical boundaries, or operational constraints, SENTINEL intervenes immediately. This means organisations move from documenting AI risk after the fact to controlling AI behaviour before it causes harm.

Why SENTINEL Matters
Modern enterprises operate with dozens or hundreds of AI systems, including:
- Internal models
- Large language models
- Agent networks
- SaaS-embedded AI
- Automated workflows
- External model APIs

These systems interact at machine speed, often outside traditional oversight structures. Without runtime governance, small errors or contaminated outputs can cascade rapidly across systems. SENTINEL acts as the control layer that prevents those cascades.

"AI failure is not linear. It cascades across systems: one incorrect AI output can propagate across underwriting, pricing, and capital models."
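The execution-path gate described above can be sketched in a few lines. This is an illustrative sketch only, not SENTINEL's actual interface: the `Decision`, `Verdict`, and `runtime_gate` names, and the example authority-limit policy, are hypothetical.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Decision:
    actor: str      # the AI system requesting the action
    action: str     # what it intends to do
    payload: dict   # parameters of the action

@dataclass
class Verdict:
    allowed: bool
    reasons: list = field(default_factory=list)

def runtime_gate(decision: Decision, policies: list) -> Verdict:
    """Evaluate a decision against every policy BEFORE it executes.
    Each policy returns None if satisfied, or a human-readable violation."""
    violations = [v for p in policies if (v := p(decision)) is not None]
    return Verdict(allowed=not violations, reasons=violations)

# Hypothetical policy: the AI may not approve payments above its authority limit.
def authority_limit(d: Decision) -> str | None:
    if d.action == "approve_payment" and d.payload.get("amount", 0) > 10_000:
        return "amount exceeds delegated authority of 10,000"
    return None

verdict = runtime_gate(
    Decision("claims-agent", "approve_payment", {"amount": 25_000}),
    [authority_limit],
)
# The action is blocked before execution, not reported after the fact.
assert not verdict.allowed and verdict.reasons
```

The essential point is the control flow: the gate sits between the AI and the system that would carry out its action, so a violating decision simply never executes.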
SENTINEL's Capabilities
Real-Time Decision Enforcement
Validates every AI decision at the moment of execution, blocking or escalating actions that violate policy, ethics, or authority boundaries.

Universal AI Coverage
Governs all AI across the enterprise: internal agents, LLM copilots, third-party SaaS tools, automation platforms, and external APIs.

Runtime Failure Prevention
Stops hallucinations, misaligned optimisation, rogue execution, and unauthorised actions before they propagate.

Constitutional Governance via CITADEL
Operates within CITADEL alongside CORTA, PACED, EIP, NOS, and TruthOps, enforcing lawful, aligned, and provable AI actions.

Why Does This Matter?
When AI Cannot Be Stopped, Risk Cannot Be Managed
AI systems now act at machine speed. If an organisation cannot prevent an AI decision from executing, it does not control the risk; it inherits the liability.

The "Black Box" Liability
Decisions occur in opaque neural networks with zero visibility into the reasoning process, leaving you unable to justify outcomes to stakeholders.

Silent IP Leakage
Without an epistemic firewall, your most sensitive data and proprietary "secret sauce" are exposed to public models through uncontrolled prompts.

Unchecked Hallucinations
Confidence is not truth. Critical errors and fabricated facts slip past human review and enter production, threatening reputation and revenue.

Broken Decision Provenance
When things go wrong, the audit trail is missing. Tracing a specific failure back to its root cause becomes impossible.

Regulatory Exposure
New laws demand transparency. Without a verifiably immutable log, you face fines and sanctions for compliance you cannot prove.

Agentic AI Chaos
Autonomous agents making decisions without oversight create cascading failures. Without control frameworks such as CORTA and ARG, agent systems become uncontrollable.
With SENTINEL, AI actions are only permitted when they are safe, lawful, and aligned with the organisation's objectives — shifting governance from documentation to real-time operational control.
Key Business Benefits for Executives
Prevent catastrophic AI errors
Decisions are validated before execution rather than investigated after damage occurs.

Maintain human authority
AI cannot self-authorise actions beyond its defined limits.

Protect enterprise reputation and compliance
Every decision is traceable, explainable, and auditable.

Enable safe AI scale
Organisations can deploy more AI systems with confidence because governance operates automatically at runtime.

Create trusted automation
AI becomes a controlled asset rather than an unmanaged risk.

The Strategic Outcome
SENTINEL transforms AI governance from documentation and oversight into real-time operational control. It effectively becomes a regulatory firewall for AI decisions. Instead of asking "Was this AI decision compliant?", executives can ask a more powerful question: "Was the decision allowed to happen at all?" With SENTINEL, the answer is yes only when the action is safe, lawful, and aligned with the organisation's objectives.
SENTINEL Control Panel
The Command Centre for Enterprise AI Decisions
AI is now making decisions across copilots, SaaS platforms, automated workflows, and internal systems. Yet most organisations cannot see what those systems are doing, why they are doing it, or whether those actions comply with company policy. The SENTINEL Control Panel gives organisations real-time control over AI decision execution. Every AI prompt and action is intercepted, verified, and assessed before it is allowed to proceed. Decisions can be blocked, flagged, or approved instantly, ensuring that unsafe, non-compliant, or unauthorised actions never execute. Unlike traditional monitoring or GRC tools, SENTINEL operates inside the execution path itself, enforcing governance at the moment decisions are made. SENTINEL can also suggest alternative prompts to users that might produce better results.

Control Panel View
This provides a live operational view of AI activity across the enterprise, with full traceability, intervention history, and verification insight, enabling organisations to move fast without losing control.

Real-Time Executive Visibility
The Control Panel provides leadership with a live view of AI activity across the organisation. Senior management can immediately see:
- Who is using AI systems
- What prompts are being issued
- Which AI models or tools are involved
- What actions the AI intends to take
- Whether the request complies with company policy

This gives leadership operational awareness of AI usage across the enterprise as it happens.

Runtime Control of AI Decisions
SENTINEL operates directly inside the execution path of AI systems. Before an AI action is executed, the platform evaluates the request against internal policies, authority limits, and governance rules. If the action falls outside approved parameters, it can be flagged for review, modified, or prevented from executing. This ensures AI decisions cannot occur without organisational oversight.
Guided Prompt Compliance
When users issue prompts to AI systems, the platform evaluates them against internal governance policies. If a prompt is likely to breach policy, the Control Panel can suggest a revised prompt that remains compliant while still achieving the intended outcome. Employees continue to benefit from AI productivity while the organisation maintains responsible usage.

Complete Audit Trail for AI Decisions
Every AI interaction is recorded with a full operational audit trail. The Control Panel captures:
- The prompt issued
- The individual or system that initiated it
- The AI model involved
- The decision requested
- Any approvals or escalations
- The final execution outcome

All activity is timestamped and traceable. This provides a clear evidentiary record of how AI decisions were governed.

Assurance for Boards, Risk Teams, and Auditors
The Control Panel gives organisations the transparency needed to demonstrate responsible AI management.
- Senior leaders gain confidence that AI activity is visible and controlled.
- Risk and compliance teams gain operational oversight of how AI is being used.
- Auditors gain a verifiable record showing how AI decisions were initiated, reviewed, and authorised.
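The audit-trail fields listed above lend themselves to a tamper-evident log. Below is a minimal sketch assuming a simple hash-chained design; the field names follow the list above, but the function and schema are hypothetical illustrations, not SENTINEL's real implementation.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_entry(trail, *, prompt, initiator, model,
                       decision, escalations, outcome):
    """Append a timestamped entry whose hash chains to the previous entry,
    so any later modification of the trail is detectable."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64  # genesis marker
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt, "initiator": initiator, "model": model,
        "decision": decision, "escalations": escalations,
        "outcome": outcome, "prev_hash": prev_hash,
    }
    # Hash a canonical serialisation of the entry, including the previous hash.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    trail.append(entry)
    return entry

trail = []
append_audit_entry(trail, prompt="Summarise claim 4411", initiator="j.smith",
                   model="llm-copilot", decision="allow", escalations=[],
                   outcome="executed")
append_audit_entry(trail, prompt="Approve payment", initiator="claims-agent",
                   model="internal-model", decision="block",
                   escalations=["risk-team"], outcome="blocked")
# Each entry chains to the one before it.
assert trail[1]["prev_hash"] == trail[0]["hash"]
```

Because every hash covers the previous hash, editing or deleting any earlier entry breaks the chain for all entries after it, which is what makes such a log a credible evidentiary record.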
We Check & Enforce Debate
Most AI governance tools observe outcomes after a decision is made. SENTINEL governs the reasoning before a decision is allowed to execute. Every AI output is internally challenged, contradicted, and verified in real time. If it cannot defend its reasoning, it does not act.

1. Input: data enters the system
2. Harm Gate: initial safety screening
3. Opposition Network: mandatory internal debate
4. TruthOps Verification: multi-layer validation
5. Validated Output

SENTINEL introduces a new model of AI control designed specifically for the synthetic intelligence era. Instead of managing documents about AI risk, AITru governs how AI behaves at runtime.

Real-Time Decision Control
AI decisions are evaluated before execution to ensure they comply with organisational policy, authority boundaries, and ethical constraints.

Truth Verification
Information entering AI systems is verified to prevent hallucinations, misinformation, or synthetic contamination.

Enterprise-Wide AI Oversight
AITru observes reasoning and behaviour across all AI systems operating within the enterprise ecosystem.

Safe AI Execution
Automated actions are permitted only when they meet defined governance and verification standards.

Continuous Cognitive Integrity
AI systems remain aligned with organisational objectives, trusted information sources, and operational constraints.

SENTINEL ensures you move from compliance reporting to compliance enforcement.
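The staged pipeline described above can be sketched as a fail-fast sequence of checks. The stage logic below is placeholder only (the real Harm Gate, Opposition Network, and TruthOps checks are proprietary and not public); what the sketch mirrors is the control flow, where any failed stage blocks the output from acting.

```python
def harm_gate(output: str) -> bool:
    """Initial safety screening (placeholder: real checks are not public)."""
    return "UNSAFE" not in output

def opposition_network(output: str) -> bool:
    """Mandatory internal debate: the output must survive a counter-argument.
    Stand-in check; in practice a second model would argue against the first."""
    return len(output.strip()) > 0

def truthops_verification(output: str) -> bool:
    """Multi-layer validation against trusted sources (placeholder)."""
    return True

def validate(output: str):
    """Run the staged pipeline; the first failed stage blocks the action."""
    for stage in (harm_gate, opposition_network, truthops_verification):
        if not stage(output):
            return False, stage.__name__  # cannot defend itself: no action
    return True, "validated"

assert validate("Quarterly loss ratio is 62%.") == (True, "validated")
assert validate("UNSAFE recommendation") == (False, "harm_gate")
```

Returning the name of the failing stage matters as much as the block itself: it is what lets the audit trail record not just that an output was stopped, but where and why.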
How SENTINEL Enables Regulatory Outcomes
Regulations consistently require:
- Explainability of decisions
- Auditability and traceability
- Human oversight and authority control
- Data integrity and reliability
- Prevention of bias and unintended outcomes

But they do not solve the core problem: how do you stop a non-compliant AI decision before it executes? SENTINEL translates regulatory expectations into real-time enforcement at the point of decision.

Pre-Execution Control
Decisions are evaluated against policy, authority, and regulatory constraints before execution.

Explainability & Audit
Full reasoning chains are captured, reconstructed, and exposed for audit.

Data Integrity
All inputs are verified, validated, and protected from synthetic contamination.

Human Authority Enforcement
AI cannot act outside defined authority boundaries.

Continuous Oversight
Reasoning, drift, and alignment are monitored across all AI systems in real time.

Before vs After SENTINEL
Before:
- Compliance validated after execution
- Limited visibility into reasoning
- Fragmented audit trails
- Exposure to regulatory breach and fines

After:
- Decisions blocked if non-compliant
- Full decision lineage and proof
- Continuous audit readiness
- AI operates within enforceable regulatory boundaries

SENTINEL does not replace compliance frameworks. It ensures they actually work.
Reinsurance Sector
Solving the Un-Modellable Risk
Managing Unseen Liabilities in a Complex Landscape
Reinsurance is the business of risks nobody else wants to touch. Climate change, pandemics, cyber warfare: these are risks that defy conventional models. Traditional AI, trained on historical data, struggles to predict or even comprehend these "black swan" events. The liability of miscalculating these risks is catastrophic. SENTINEL provides the guardrails needed for AI to operate in these uncertain domains, enforcing constitutional AI principles even when the future is opaque.

The Pain
Underwriting & Claims
- AI models, trained on past events, are blind to emerging, unprecedented risks.
- Inaccurate risk assessments lead to underpriced policies or catastrophic payouts.
- Manual human oversight is too slow to catch rapidly evolving threats.

Risk Management & Regulatory Compliance
- Regulators demand explainability and auditability for AI-driven decisions.
- Ensuring AI adheres to complex global compliance standards is a constant challenge.
- A single AI error in a high-profile event can cause lasting reputational damage.

"In a market defined by the unpredictable, you need an AI that's predictably safe."

The Liability Trap
Banking
SENTINEL in Action: Preventing Regulatory Disaster in Credit
Ensuring Fair Lending and Compliance in Automated Credit Decisions
A major retail bank was facing increasing pressure from regulators regarding potential bias in its AI-driven credit approval system. The bank's internal models were flagging a disproportionate number of applications from certain demographics, risking massive fines and reputational damage. The bank needed a way to ensure fairness and compliance at scale, without compromising efficiency.

Step 1: Automated Initial Review
AI flags a high-risk loan application, but the decision lacks clear, auditable reasoning due to complex model interactions.

Step 2: SENTINEL Intervention
Before execution, SENTINEL intercepts the decision. Its Opposition Network internally challenges the AI's opaque reasoning, testing for bias or non-compliance against established ethical and regulatory rules.

Step 3: Verified & Approved Action
The AI's reasoning is validated, and SENTINEL ensures the decision is fair, compliant, and fully explainable. The loan application is processed with a clear audit trail.

The Outcome: Crisis Averted
Without SENTINEL:
- Unchecked bias in credit decisions
- Massive regulatory fines and penalties
- Severe reputational damage and loss of customer trust
- Slow, manual investigations after the fact

With SENTINEL:
- Bias detected and mitigated in real time
- Full regulatory compliance maintained automatically
- Enhanced trust and a transparent lending process
- Proactive prevention rather than reactive damage control

SENTINEL ensured the bank's AI adhered to all fair lending laws, preventing a potential regulatory disaster and reinforcing public trust in its automated systems.
Shift from post-incident auditing to proactive, policy-based permissioning: was this AI decision authorised to execute, in real time, under strict controls?