36-Layer Architecture

BrainOS is an AI Operating System — 36 cognitive layers organised into 5 functional planes: Compute, Memory, Reasoning, Learning, and Services. Every layer communicates through an event bus with Lamport clock ordering, deduplication, and backpressure.

Compute (L1–L5)

Local LLM, Brain IQ routing, smart auto-scale

Memory (L6–L12)

Working memory blackboard, 27-query context mesh, federated knowledge

Reasoning (L13–L22)

Backward chaining, LATS reflection, adversarial verification, DreamCoder

Learning (L23–L29)

RL closed loop, dopamine/GABA signals, LoRA fine-tuning, failure memory

Services (L30–L36)

SE-aaS · AaaS · PM-aaS · Process FSM · Overnight Code Agent
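The plane-to-layer mapping above can be sketched directly. This is an illustrative snippet, not BrainOS source; the `plane_of` helper and the `PLANES` table are assumptions built from the ranges listed above.

```python
# Illustrative sketch: map a layer number (L1–L36) to its functional plane,
# using the ranges given in the section above. Not a BrainOS API.
PLANES = {
    "Compute":   range(1, 6),    # L1–L5
    "Memory":    range(6, 13),   # L6–L12
    "Reasoning": range(13, 23),  # L13–L22
    "Learning":  range(23, 30),  # L23–L29
    "Services":  range(30, 37),  # L30–L36
}

def plane_of(layer: int) -> str:
    """Return the plane that owns a given layer number."""
    for plane, layers in PLANES.items():
        if layer in layers:
            return plane
    raise ValueError(f"unknown layer L{layer}")
```

Because the five ranges are contiguous and non-overlapping, every layer L1–L36 resolves to exactly one plane.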

Intelligence Tiers

1. Cognitive Control Plane

Brain Directive (IQ scalar), System-0/1/2 hierarchy, Execution Contracts — <5ms to route any query

2. Memory & Context

Working memory blackboard, 27-query context assembly, Brain Context Mesh across all 6 decision points

3. Service Fabric

SE-aaS (27 domains), AaaS (11 agents), PM-aaS (7 domains) — each backed by RL feedback loop

4. Self-Evolution

DreamCoder tool synthesis, RLVR calibration, Strategy Bandit, Federated Causal Learning
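Tier 1's routing idea can be sketched as a function of a single difficulty scalar. This is a hedged illustration of the Brain Directive concept only; the thresholds and the `route` function are invented for the example and are not BrainOS values.

```python
# Hedged sketch: a Brain Directive "IQ scalar" routes each query to
# System-0 (reflex), System-1 (fast), or System-2 (deliberate).
# The 0.2 / 0.7 thresholds are illustrative assumptions.
def route(iq_demand: float) -> str:
    """Pick a reasoning system from a 0–1 difficulty scalar."""
    if iq_demand < 0.2:
        return "System-0"   # cached / reflex answer
    if iq_demand < 0.7:
        return "System-1"   # single-pass model call
    return "System-2"       # multi-step reasoning and verification
```

A scalar comparison like this is cheap enough to stay well under the <5ms routing budget quoted above.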

5 Bridges

Bridges connect the 5 planes through the event bus, creating a self-reinforcing intelligence loop.

Signal Bridge: L1 Ingestion → L5 Constraint Verification

Collects signals from 16+ connectors, validates constraints before any execution

Learning Bridge: L5 Verification → L25 Workspace Brain

Verified outcomes strengthen federated knowledge edges via dopamine/GABA signals

Context Bridge: L25 Workspace Brain → L6 Domain Agents

Brain Context Mesh wires 27-query context assembly at all 6 service decision points

RL Outcome Bridge: L6 Domain Agents → L26 Process Intelligence

Every task outcome recorded — quality signals drive FSM planning decisions

Living Model Bridge: L36 Local LLM → L2 Cognitive Engine

Trained Local LLM adapter weights injected into Brain IQ routing decisions
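Since the bridges connect planes through the event bus, each one can be modeled as a subscription that republishes events from its source layer's topic to its target's. The `Bus` class and the topic names below are illustrative assumptions, not BrainOS's API.

```python
# Hedged sketch: bridges as pub/sub subscriptions on an event bus.
# Bus API and topic names are invented for illustration.
from collections import defaultdict
from typing import Callable

class Bus:
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subs[topic]:
            handler(event)

bus = Bus()
# Each bridge listens on its source layer and republishes to its target.
bus.subscribe("L5.verified", lambda e: bus.publish("L25.workspace", e))  # Learning Bridge
bus.subscribe("L25.context", lambda e: bus.publish("L6.agents", e))      # Context Bridge
```

Chaining subscriptions this way is what makes the loop self-reinforcing: an event leaving one plane becomes the input event of the next.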

Event Bus

The event bus is the spine of the system. It provides:

  • Lamport clocks for ordered event sequencing across distributed workers
  • Deduplication to prevent duplicate signal processing
  • Priority queues so high-severity events (anomalies, cascades) process first
  • Backpressure to prevent overwhelming downstream consumers during burst ingestion
  • Pub/sub for decoupled communication between all 36 layers
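Three of the properties above (Lamport-clock ordering, deduplication, and priority queuing) can be shown in a few lines. This is a minimal sketch under assumed field names (`event_id`, `priority`, `sender_clock`), not the BrainOS implementation.

```python
# Minimal sketch of event-bus receive logic: Lamport clocks, dedup,
# and a priority queue. Illustrative only; names are assumptions.
import heapq

class EventBus:
    def __init__(self) -> None:
        self.clock = 0               # local Lamport clock
        self.seen: set[str] = set()  # event ids already accepted
        self.queue: list = []        # (priority, lamport, id, payload) min-heap

    def receive(self, event_id: str, priority: int, sender_clock: int, payload) -> None:
        # Lamport rule: local clock = max(local, sender) + 1
        self.clock = max(self.clock, sender_clock) + 1
        if event_id in self.seen:    # deduplication
            return
        self.seen.add(event_id)
        # Lower priority value = more urgent (anomalies, cascades first).
        heapq.heappush(self.queue, (priority, self.clock, event_id, payload))

    def next_event(self):
        return heapq.heappop(self.queue) if self.queue else None
```

Using `(priority, lamport)` as the heap key means urgent events jump the queue, while equal-priority events drain in causal order.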

Workspace Brain vs AI Worker Brain

BrainOS separates the Workspace Brain (L25–L29 — shared intelligence across all workers in an org) from AI Worker Brains (per-worker memory, conversation history, and RL history).

  • Workspace Brain (L25–L29) — federated knowledge, process intelligence, service signals shared across all workers
  • AI Worker Brain — per-worker conversation history, task memory, and individual RL outcomes
  • Living Model (L36) — Local LLM fine-tuned on this workspace's outcomes via LoRA — the OS trains its own LLM
  • Core Brain — anonymous cross-org patterns federated to all workspaces (no raw data crosses org boundaries)
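The memory split above can be sketched as two scopes that a single outcome touches differently. The class and field names are assumptions made for illustration, not BrainOS data structures.

```python
# Illustrative sketch of the Workspace Brain / AI Worker Brain split:
# the worker keeps its raw per-task record; the workspace keeps only
# the aggregated signal shared across workers. Names are assumptions.
from dataclasses import dataclass, field

@dataclass
class WorkspaceBrain:            # shared across all workers (L25–L29)
    federated_knowledge: dict = field(default_factory=dict)
    process_intelligence: dict = field(default_factory=dict)

@dataclass
class WorkerBrain:               # one per AI worker
    worker_id: str
    conversation_history: list = field(default_factory=list)
    rl_outcomes: list = field(default_factory=list)

def record_outcome(ws: WorkspaceBrain, wk: WorkerBrain, task: str, reward: float) -> None:
    """Record a task outcome in both scopes."""
    wk.rl_outcomes.append((task, reward))                 # per-worker history
    ws.process_intelligence[task] = ws.process_intelligence.get(task, 0.0) + reward
```

The same scoping rule extends upward: only aggregate, anonymised patterns would leave the workspace for the Core Brain, never the per-worker records.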