Brain 4.0 · Live

Every AI decision.
Checked. Signed. Proved.

Brain 4.0 — a governed cognitive architecture. AI agents that reason in documented chains, use tools under authority governance, collaborate in parallel, and face adversarial review before every high-value action. The Kernel checks every decision before it executes and produces a signed certificate proving it was governed. No exceptions.

ADM-x7k2p9q1 · CONFORMANT
value_limit: PASS
authority_level: PASS
risk_score: PASS
match_threshold: PASS
daily_cumulative: PASS
temporal: PASS
cardinality: PASS
conditional: PASS
minimum_active: PASS
HMAC-SHA256: a7f3e29c8b1d4a6e9f0c2b5d8a3e7f1c4d9e2a6b8c1f5e3d7a0b4c9e2f6a1d3...
9 checks on every single action
~27μs per verification
100% fail-closed: blocked if the kernel can't verify
78 AI agents across 16 departments
8 industry packs, pre-built
The Problem

The problem nobody has solved yet

Every company deploying AI faces the same gap: proof.

In 2022, Air Canada's AI chatbot told a grieving passenger he could claim a bereavement discount after his trip. That wasn't the policy. In 2024, a tribunal ruled Air Canada was liable. The airline had argued the chatbot was a separate entity responsible for its own words. The tribunal disagreed.

That case was a chatbot that just answered questions. Now imagine AI agents that don't just answer — they act. They bind insurance policies. They settle claims. They move money. They commit code. If something goes wrong, the question isn't "what did your AI do?" — it's "can you prove it was authorised to do it?"

Most AI agents have no built-in rulebook. They try their best. But "tried their best" isn't good enough for a regulator, a tribunal, or a Lloyd's syndicate.

Regulators aren't asking "did your AI try to follow the rules?" They're asking "can you prove it followed the rules, every time, for every action?" That's the gap. The Kernel closes it.

The Solution

We built two things to fix this

A governance layer that sits between AI agents and the real world. And a complete AI workforce — 78 agents, 16 departments — that comes pre-governed from day one.

Product 01

The Kernel

The bouncer. Every AI decision goes through the Kernel before it executes. 9 mathematical checks — value, authority, risk, timing, and more. If the action is allowed, it gets a signed certificate. If not, it's blocked. No exceptions. Works with any AI.
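The fail-closed pattern can be sketched in a few lines. This is illustrative only: the check names mirror the certificate shown above, but the function names, fields, and thresholds are assumptions, not the real Kernel API, and only three of the nine checks are shown.

```python
# Hypothetical check functions — field names are illustrative assumptions.
def check_value_limit(a):   return a["value"] <= a["value_limit"]
def check_authority(a):     return a["authority_level"] >= a["required_level"]
def check_risk(a):          return a["risk_score"] <= a["risk_cap"]

CHECKS = [check_value_limit, check_authority, check_risk]  # the real Kernel runs 9

def kernel_verdict(action):
    # Fail-closed: if any check fails, or anything prevents verification
    # (missing field, exception), the action is BLOCKED, never delayed.
    try:
        ok = all(check(action) for check in CHECKS)
    except Exception:
        ok = False
    return "CONFORMANT" if ok else "BLOCKED"
```

Note that an empty or malformed action blocks rather than passes: "can't verify" and "failed verification" produce the same verdict.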

Learn about the Kernel
Product 02

The Brain — v4.0

A complete governed cognitive architecture. 78 agents across 16 departments — with Brain 4.0 capabilities: reasoning chains that document every step, governed tool use with HMAC-signed invocations, multi-agent parallel dispatch with specialist synthesis, adversarial LLM review on high-value actions, evidence grading (A–E), and 76 Lean 4 theorems + 35 Z3 properties as formal verification of the kernel.

Learn about the Brain
Brain 4.0

A governed cognitive architecture

Brain 4.0 adds four capability layers above the Kernel. Every layer is governed. Every action is documented.

Intelligence

Reasoning That Documents Itself

  • Multi-step reasoning chains — each step logged with intermediate results
  • 5 cognitive archetypes — Guardian, Analyst, Executor, Strategist, Diplomat
  • Knowledge RAG with source citations and evidence grade metadata
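A reasoning chain that logs each step with its intermediate result can be sketched minimally. All names here are hypothetical; this shows only the shape of a self-documenting chain, not the Brain's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningChain:
    steps: list = field(default_factory=list)

    def step(self, description, result):
        # Log each step with its intermediate result before returning it,
        # so the finished chain documents how the answer was reached.
        self.steps.append({"n": len(self.steps) + 1,
                          "description": description,
                          "result": result})
        return result

chain = ReasoningChain()
premium = chain.step("base premium from rating table", 1200)
loaded = chain.step("apply 10% risk loading", round(premium * 1.10))
```

After the run, `chain.steps` holds a numbered, inspectable record of every intermediate value.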
Governance

Authority-Checked Tool Use

  • Governed tool invocations — authority level enforced, HMAC-signed
  • Evidence grading (A–E) with minimum-grade enforcement per action type
  • Adversarial LLM review — Haiku challenger on high-value actions
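Minimum-grade enforcement per action type can be illustrated as follows. The action names and grade thresholds are invented for this sketch; only the A–E grading scale comes from the page.

```python
# Hypothetical per-action minimum evidence grades (A strongest … E weakest).
MIN_GRADE = {"bind_policy": "B", "settle_claim": "C", "draft_email": "E"}

def meets_grade(grade, minimum):
    # Letter comparison works directly: "A" < "B" < … < "E",
    # so a stronger grade sorts lower or equal.
    return grade <= minimum

def authorise_tool(action_type, evidence_grade, authority_level, required_level):
    # Both gates must pass; an unknown action type defaults to the
    # strictest grade ("A"), keeping the policy fail-closed.
    if authority_level < required_level:
        return False
    return meets_grade(evidence_grade, MIN_GRADE.get(action_type, "A"))
```

Defaulting unknown actions to grade A means a tool the policy has never seen can only run on the strongest evidence.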
Collaboration

Parallel Specialist Dispatch

  • Multi-agent parallel dispatch with read-only specialist pattern
  • Disagreement detection with automatic human-hold escalation
  • Coordinator synthesis through full governance pipeline
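The dispatch-and-escalate pattern can be sketched with standard-library threads. This is a toy model under stated assumptions: specialists are plain callables, "disagreement" is exact answer inequality, and the real synthesis pipeline is not shown.

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch(specialists, task):
    # Specialists run in parallel and are treated as read-only: they return
    # answers but never act. The coordinator synthesises the result.
    with ThreadPoolExecutor(max_workers=len(specialists)) as pool:
        answers = list(pool.map(lambda s: s(task), specialists))
    if len(set(answers)) > 1:
        # Any disagreement between specialists escalates to a human hold.
        return {"status": "HUMAN_HOLD", "answers": answers}
    return {"status": "SYNTHESISED", "answer": answers[0]}
```

Escalating on disagreement rather than majority-voting is the conservative choice: a split opinion is a signal, not a tie to break silently.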
Infrastructure

Formally Verified Foundation

  • 76 Lean 4 theorems — zero sorry, zero axioms
  • 35 Z3 SMT properties — P1–P35 covering temporal, relationship, ratio, and Ring 1 invariants
  • Conformal prediction for calibrated confidence intervals
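As a general illustration of the conformal-prediction idea (not the Brain's implementation), split conformal prediction turns a point prediction plus held-out calibration residuals into an interval with finite-sample coverage:

```python
import math

def conformal_interval(calibration_residuals, prediction, alpha=0.1):
    # Split conformal prediction (sketch): take the (1 - alpha) finite-sample
    # quantile of held-out absolute residuals and use it to widen a point
    # prediction into an interval with roughly (1 - alpha) coverage.
    scores = sorted(calibration_residuals)
    n = len(scores)
    k = math.ceil((n + 1) * (1 - alpha)) - 1  # 0-indexed quantile position
    q = scores[min(k, n - 1)]
    return (prediction - q, prediction + q)
```

With alpha = 0.1 the interval is calibrated to contain the true value about 90% of the time, under the usual exchangeability assumption.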
How It Works

Three steps. Every time. No exceptions.

Step 01
🤖
An AI agent wants to act

Bind a policy. Settle a claim. Approve a loan. The agent has made a decision and is ready to execute it.

Step 02
The Kernel checks it

9 mathematical checks in ~27 microseconds. Value limits. Authority levels. Risk scores. Time windows. All 9, every time.

Step 03
You get a signed certificate

Proof the action was governed. HMAC-SHA256 signed. Tamper-evident. If any check failed — the action is blocked, not delayed.
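The sign-and-verify step can be sketched with the standard library. The key, record fields, and function names are hypothetical; only the HMAC-SHA256 scheme comes from the page.

```python
import hashlib
import hmac
import json

KEY = b"audit-signing-key"  # hypothetical shared secret

def issue_certificate(decision_record):
    # Canonicalise the record (sorted keys), then sign it so any
    # later edit to the record invalidates the certificate.
    payload = json.dumps(decision_record, sort_keys=True).encode()
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()

def verify_certificate(decision_record, certificate):
    # Tamper-evident: recompute and compare in constant time.
    return hmac.compare_digest(issue_certificate(decision_record), certificate)
```

An auditor holding the key can recompute the signature from the stored record; if even one field was altered after the fact, verification fails.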

Who It's For

Built for regulated industries

Anywhere an AI makes a consequential decision, you need proof it was governed.

🏛️

Insurance companies

AI underwriters and claims handlers working under Kernel governance. Signed evidence for every decision. FCA AI governance ready.

Lloyd's DA FCA Consumer Duty
🏦

Banks & lenders

Lending agents that can't exceed their authority. Credit decisions with a paper trail. Know-your-customer compliance built in.

Lending limits KYC EU AI Act
💻

Engineering teams

AI coding agents that write code, run tests, and open PRs — all checked before they touch your codebase. No rogue commits.

Code authority Risk gates Audit trail
⚖️

Legal & compliance teams

AI legal research grounded in BAILII and UK statute law. Contract analysis with red-flag detection. Document pipeline that produces risk scores in seconds.

UK GDPR BAILII Legal AI built-in
Get Started

Ready to govern your AI?

Whether you have existing AI agents or need to build from scratch, we'll have you producing signed governance certificates within a week.