Your agents have amnesia. They can't talk to each other. You spend 80% of your time as a highly paid secretary, copy-pasting context between them. Brain is the self-correcting knowledge graph that gives your agents shared memory, governed autonomy, and verifiable intent.
You already use AI agents. Your coding agent writes your code. Your chat assistant helps you think through architecture. Your editor agent autocompletes. But none of them share context.
Every agent you add makes it worse. Brain fixes this — not by replacing your agents, but by giving them shared memory.
Most platforms try "agent swarms" — agents messaging agents. That creates a game of telephone where instructions get distorted. The graph is the single source of truth.
| Dimension | Agent swarms / message buses | Knowledge graph (Brain) |
|---|---|---|
| Logic | Scripted workflows (if A, do B) | State-based graph (emergent logic) |
| Memory | Ephemeral, session-based | Persistent, pruned, versioned |
| Coordination | Agents message agents (telephone game) | Agents read/write to shared truth |
| Verification | Assumes API calls work | Continuous telemetry (reality grounding) |
| Autonomy | "Let it rip" (high risk of loops) | Authority scopes (risk-managed) |
| Over time | Performance degrades | System gets smarter via learnings |
| Security | Sandbox isolation (the "box") | Governance graph + sandbox (the "brain") |
| Auditing | Log-based (text dumps) | Graph-based (hierarchical traces, machine-readable) |
Each agent has a role, a domain, and authority scopes. They coordinate through the knowledge graph — not through you.
Technical decisions, system design, architecture constraints. Checks implementations against what was decided. Resolves conflicts between competing approaches.
Market positioning, pricing, GTM, competitive response. Challenges product decisions against business viability. Catches positioning drift before it compounds.
Task tracking, priority management, execution velocity. Flags blocked work, stale decisions, and resource conflicts. Keeps the machine running.
Your existing tools connected to the graph. Context injected on session start. Decisions, observations, and questions flow back automatically.
Brainstorms product ideas, asks probing questions, identifies gaps. Shapes vague ideas into structured projects, features, and decisions in the graph.
Scans the graph for patterns nobody asked about. Stale decisions, cross-project conflicts, missing coverage, priority drift. Surfaces observations that compound into suggestions.
No agent messages another agent. They write structured signals to the knowledge graph. The graph makes it visible to the right agent at the right time.
While implementing rate limiting, a coding agent detects that src/billing/api.ts uses REST — but the graph has a confirmed decision to standardize on tRPC. It logs an observation.
The Architect checks the observation against constraints. Confirms the contradiction is real. Generates a suggestion: "Migrate billing API to tRPC, or revisit the standardization decision."
You see the suggestion with full provenance — the observation, the contradicted decision, the Architect's reasoning. You accept it. A migration task is created with one click.
The migration task appears in the coding agent's context. It decomposes into subtasks, works through them, and status rolls up automatically. No human copied anything between tabs.
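The whole loop can be sketched as plain reads and writes against the graph. The types, IDs, and helper below are illustrative assumptions, not Brain's actual API:

```typescript
// Illustrative sketch only — these types and IDs are assumptions, not Brain's API.
interface Decision { id: string; summary: string; confirmed: boolean }
interface Observation { id: string; file: string; finding: string; contradicts: string }
interface Suggestion { id: string; source: string; text: string }

// 1. The coding agent logs what it saw, referencing the decision it conflicts with.
const observation: Observation = {
  id: "obs-101",
  file: "src/billing/api.ts",
  finding: "uses REST",
  contradicts: "decision-47",
};

// 2. The Architect checks the observation against confirmed decisions and writes
//    a suggestion back to the graph — no agent ever messages another agent.
function review(obs: Observation, decisions: Decision[]): Suggestion | null {
  const hit = decisions.find((d) => d.id === obs.contradicts && d.confirmed);
  if (!hit) return null; // no confirmed decision → no real contradiction
  return {
    id: "sugg-7",
    source: obs.id,
    text: `Migrate ${obs.file} to match "${hit.summary}", or revisit ${hit.id}`,
  };
}

const decisions: Decision[] = [
  { id: "decision-47", summary: "standardize on tRPC", confirmed: true },
];
const suggestion = review(observation, decisions);
```

Note that the suggestion carries `source: "obs-101"` — the provenance link you see in the UI is just this edge in the graph.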
One-time setup. Context injection is automatic on every session after that.
```
# One-time workspace setup
$ brain init
  Opens browser → authenticate → approve scopes
  ✓ Connected to workspace

# Start a task-scoped session
$ brain start task:implement-rate-limiting
  Context: 3 decisions, 2 constraints, 1 open question
  Task status: todo → in_progress

# Or just open your MCP-compatible coding agent
$ codex
  SessionStart → project context loaded
  4 decisions · 2 tasks · 1 recent observation
```
The primitives that make coordination possible.
Every decision is tracked — who made it, why, and what alternatives were considered. Agents propose. Humans confirm. Nothing falls through the cracks.
"I noticed X." Agents surface contradictions, gaps, patterns, and risks as they work. Observations accumulate and compound into actionable suggestions.
Work breaks down hierarchically. Agents decompose tasks at runtime. Status rolls up automatically. You see progress without tracking it.
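The roll-up rule itself fits in a few lines. This is a hypothetical model assuming a simple three-state status, not Brain's real schema:

```typescript
// Hypothetical sketch of automatic status roll-up over a task tree.
type Status = "todo" | "in_progress" | "done";
interface Task { id: string; status: Status; subtasks: Task[] }

// A parent's status is derived from its children:
// all done → done; anything started → in_progress; otherwise todo.
function rollUp(task: Task): Status {
  if (task.subtasks.length === 0) return task.status;
  const children = task.subtasks.map(rollUp);
  if (children.every((s) => s === "done")) return "done";
  if (children.some((s) => s !== "todo")) return "in_progress";
  return "todo";
}

const rateLimiting: Task = {
  id: "task:implement-rate-limiting",
  status: "todo",
  subtasks: [
    { id: "design", status: "done", subtasks: [] },
    { id: "implement", status: "in_progress", subtasks: [] },
    { id: "test", status: "todo", subtasks: [] },
  ],
};
// rollUp(rateLimiting) → "in_progress"
```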
Agents tell you what you should be thinking about. Accept a suggestion and it becomes a task, decision, or feature — with full trace back to the evidence.
When an agent doesn't know, it asks instead of guessing. You answer. The answer becomes a decision in the graph. No stalling, no silent assumptions.
Every chat produces structured knowledge. Conversations group by project automatically. Branch when topics diverge. Find anything by following the graph, not scrolling history.
Code is linked to the decisions and tasks it implements. Contradictions are caught before they land. Every change has a trail back to the intent behind it.
Control what each agent can do without asking. Start restrictive. Expand trust over time. You set the boundary — agents stay within it.
Every agent action starts as an intent — a structured request in the graph. Intents carry the full authorization context: what, why, and with what constraints. They're evaluated against authority scopes before execution.
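A minimal model of that evaluation step — every field and name below is an assumption for illustration, not Brain's actual intent schema:

```typescript
// Illustrative intent check against an authority scope (assumed shapes).
interface Intent {
  action: string;        // what
  reason: string;        // why
  constraints: string[]; // decision IDs that bound the action
}
interface AuthorityScope {
  agent: string;
  autoApprove: string[]; // actions the agent may take without asking
}

type Verdict = "execute" | "escalate_to_human";

// Evaluation happens before execution: in-scope actions run,
// everything else becomes a question for a human.
function evaluate(intent: Intent, scope: AuthorityScope): Verdict {
  return scope.autoApprove.includes(intent.action) ? "execute" : "escalate_to_human";
}

const intent: Intent = {
  action: "create_branch",
  reason: "implement rate limiting",
  constraints: ["decision-47"],
};
const scope: AuthorityScope = {
  agent: "coding-agent",
  autoApprove: ["create_branch", "run_tests"],
};
// evaluate(intent, scope) → "execute"
// evaluate({ ...intent, action: "merge_to_main" }, scope) → "escalate_to_human"
```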
Agents that repeat mistakes are useless. Learnings are behavioral rules injected into agent prompts at runtime — created by humans directly or suggested by other agents. The system gets smarter as it works, not dumber.
Descriptions update themselves. Ship a feature — the project summary reflects it. Confirm a decision — related entities incorporate it. No one writes status updates.
Conflicts between projects surface automatically. Duplicate work gets flagged. A decision in one project that breaks another is caught before anyone notices manually.
Every session is remembered. The next agent knows what the last one did — what was decided, what questions came up, what got built. Context carries forward.
One person across all tools. Your Slack, GitHub, and terminal sessions all resolve to the same identity. Agents act on your behalf with scoped permissions.
Every agent execution is a graph-native call tree — not a log file. Subagent spawns, tool calls, and decisions form a hierarchical trace you can traverse, query, and audit. Forensic debugging is a graph query, not grep.
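Because a trace is a tree rather than a text dump, "what led to this action?" is a traversal. A toy version, with the node structure assumed for illustration:

```typescript
// Toy hierarchical trace: intent → subagent spawn → tool calls → action.
interface TraceNode { id: string; kind: string; children: TraceNode[] }

// Depth-first search returning the path from the root to a target node.
function pathTo(node: TraceNode, target: string, trail: string[] = []): string[] | null {
  const here = [...trail, node.id];
  if (node.id === target) return here;
  for (const child of node.children) {
    const found = pathTo(child, target, here);
    if (found) return found;
  }
  return null;
}

const trace: TraceNode = {
  id: "intent:deploy-v2", kind: "intent",
  children: [{
    id: "spawn:builder", kind: "subagent",
    children: [
      { id: "tool:run_tests", kind: "tool_call", children: [] },
      { id: "action:deploy", kind: "action", children: [] },
    ],
  }],
};
// pathTo(trace, "action:deploy")
//   → ["intent:deploy-v2", "spawn:builder", "action:deploy"]
```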
Deterministic governance rules stored as graph nodes, not prompt text. Agents evaluate intents against policy constraints before acting. Update one node to change what every agent is allowed to do — no prompt rewriting.
Autonomous systems don't fail from lack of intelligence. They fail from drift — slow divergence between what the system believes and what's actually true.
Decisions made in v1.0 become poison for v2.0. Brain uses temporal decay — nodes that aren't referenced lose weight over time. The Observer agent runs conflict resolution loops, flagging stale decisions that contradict recent commits. Context is filtered for relevance and recency, not dumped wholesale.
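One simple way to implement that decay is an exponential half-life on reference recency. The formula, half-life, and threshold here are illustrative, not Brain's actual tuning:

```typescript
// Illustrative temporal decay: a node's weight halves every `halfLifeDays`
// it goes unreferenced, so stale decisions sink out of injected context.
function decayedWeight(base: number, daysSinceLastRef: number, halfLifeDays = 30): number {
  return base * Math.pow(0.5, daysSinceLastRef / halfLifeDays);
}

// Context assembly keeps only nodes above a relevance threshold
// instead of dumping the whole graph into the prompt.
function relevant<T extends { weight: number }>(nodes: T[], threshold = 0.25): T[] {
  return nodes.filter((n) => n.weight >= threshold);
}

// decayedWeight(1.0, 0)  → 1.0   (just referenced)
// decayedWeight(1.0, 30) → 0.5   (one half-life old)
// decayedWeight(1.0, 90) → 0.125 (effectively stale)
```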
Autonomy isn't binary. Brain uses tiered authority scopes — from zero-human atomic actions to multi-model consensus for high-stakes moves. Agents operate within risk budgets, not permission checkboxes. You manage thresholds, not individual approvals.
If the Brain only reads its own graph, it's a delusion engine. Observer agents perform truth audits — checking claims against actual state via webhooks and integrations. When reality diverges from the graph, the system triggers a desync alert before any agent acts on stale data.
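At its core, a truth audit is a diff between what the graph claims and what integrations report. A sketch — the claim and payload shapes are assumptions:

```typescript
// Sketch of a truth audit: compare graph beliefs against observed reality
// (e.g. assembled from webhook payloads) and flag divergences.
interface GraphClaim { key: string; believed: string }

function findDesyncs(
  claims: GraphClaim[],
  reality: Record<string, string>,
): string[] {
  return claims
    .filter((c) => c.key in reality && reality[c.key] !== c.believed)
    .map((c) => c.key);
}

const claims: GraphClaim[] = [
  { key: "deploy:v2", believed: "live" },
  { key: "invoice:1042", believed: "paid" },
];
const reality: Record<string, string> = {
  "deploy:v2": "live",
  "invoice:1042": "refunded",
};
// findDesyncs(claims, reality) → ["invoice:1042"]
// → raise a desync alert before any agent acts on the stale "paid" belief
```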
The same graph that coordinates coding agents can coordinate entire departments. When Support logs an observation, Sales knows not to pitch an angry customer. No human relayed the message.
Coding agents, architecture decisions, commit-linked tasks. The foundation that's already working.
Ad spend observations, conversion tracking, copy drafts. Features shipped by Engineering auto-surface for marketing.
CRM activity flows into the graph. Agents check customer sentiment before outreach. Deal claims verified against payment webhooks.
Bug reports become observations. Repeated issues compound into feature suggestions. Support context flows to Engineering automatically.
Procurement tracked as intents with authority scopes. Budget decisions verified against Stripe. No shadow spending.
Competitive observations, pricing decisions, GTM plans. The Strategist challenges product decisions against business viability.
The competitive advantage isn't automating tasks — it's automating the context-sharing between human silos. When every department writes to the same graph, cross-departmental intelligence emerges for free.
Most autonomous platforms are black boxes. Brain is a signed logic trace. Every decision, every dollar, every line of code has a provenance chain back to the intent that authorized it.
Every decision is a node with a UUID, author, timestamp, and reasoning. Researchers and auditors can query the graph directly — no digging through chat logs or stdout dumps.
When an agent spends money or merges code, the graph records which intent authorized it, which authority scope permitted it, and which human (or consensus) approved it. The full chain is machine-readable.
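Reading that chain back is a walk up the graph. The node schema below is assumed for illustration:

```typescript
// Illustrative provenance walk: action → intent → authority scope → approval.
interface ProvenanceNode { id: string; kind: string; authorizedBy?: string }

function provenanceChain(startId: string, nodes: Map<string, ProvenanceNode>): string[] {
  const chain: string[] = [];
  let current = nodes.get(startId);
  while (current) {
    chain.push(current.id);
    current = current.authorizedBy ? nodes.get(current.authorizedBy) : undefined;
  }
  return chain;
}

const nodes = new Map<string, ProvenanceNode>([
  ["action:merge-pr-12", { id: "action:merge-pr-12", kind: "action", authorizedBy: "intent:ship-rate-limiting" }],
  ["intent:ship-rate-limiting", { id: "intent:ship-rate-limiting", kind: "intent", authorizedBy: "scope:coding-agent" }],
  ["scope:coding-agent", { id: "scope:coding-agent", kind: "authority_scope", authorizedBy: "approval:human-jane" }],
  ["approval:human-jane", { id: "approval:human-jane", kind: "approval" }],
]);
// provenanceChain("action:merge-pr-12", nodes)
//   → the full machine-readable chain, action first, approval last
```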
Agent executions are graph-native call trees. A subagent spawn becomes a root trace; each tool call, message, and decision is a child node. Traverse the full execution path with a graph query — from intent to final action.
Governance rules are structured nodes, not prompt instructions. Each policy carries typed rules, scopes, and approval requirements. The Authorizer evaluates intents against the policy graph before minting tokens — deterministic, auditable, and updateable without touching a single prompt.
Context retention, conflict resolution rate, authority distribution, decision velocity — all computable directly from the graph. No custom instrumentation required.
High-stakes actions go through an Authorizer Agent — a separate, privileged instance that validates intents against policy constraints before minting scoped tokens. The worker never sees master keys.
The knowledge graph that coordinates your agents shouldn't be a black box you rent. It should be infrastructure you own, inspect, and extend.
```
# The full stack
Graph     SurrealDB
Backend   Bun (Bun.serve) · TypeScript
Frontend  React · Tiptap · Reagraph
Auth      Better Auth · OAuth 2.1 · RAR · DPoP
LLM       Provider-agnostic (OpenRouter · Ollama · BYO keys)
Agents    MCP Server · Git Hooks
Protocol  MCP · OAuth 2.1 · RAR · DPoP · OIDC

# Lines of duct tape replaced
∞
```
Generic OAuth scopes like write:tasks aren't enough for autonomous agents. Brain's authorization server issues tokens that carry the full intent — what the agent wants to do, why, and with what constraints.
Tokens carry authorization_details drawn from the knowledge graph. Not "scope: finance" — but "move $500 from Account A to Account B, authorized by Decision #47."

```
# 1. Human authenticates via Better Auth
User → Better Auth (login, MFA) → session cookie

# 2. Agent requests authorization with intent
Agent → POST /authorize
  authorization_details: [{
    type: "intent",
    intent: "task:deploy-v2",
    actions: ["execute"],
    constraints: decision:d47
  }]

# 3. Token issued with DPoP binding
AS → Access Token (JWT)
  cnf: { jkt: "agent-key-thumbprint" }
  authorization_details: [...]

# 4. Agent proves possession on every request
Agent → DPoP: signed-proof-jwt
         Authorization: DPoP access_token
```
The Brain is the director. Sandboxes are disposable. An agent can die every 5 seconds — the next one picks up exact context because state is external to the execution environment.
Claude Code, Codex, Cursor — your existing tools on your machine, connected via MCP.
E2B, Daytona, or Docker. The graph injects a scoped token — the sandbox never sees master keys.
High-stakes moves verified by multiple models before execution. The Authorizer Agent mints tokens only when consensus clears.
Your decisions, architecture, strategy, and competitive intelligence stay on your infrastructure.
SurrealDB + Bun + React. Running in 2 minutes on any machine.
Embedded SurrealDB. No dependencies. Laptop, VPS, or Raspberry Pi.
Managed hosting for teams. Same codebase. Your data, your region.
Your Brain isn't just internal infrastructure. It's a gateway to the decentralized agent economy — where your agents can discover, hire, and transact with external agents on an open, verifiable network.
```
# Your internal governance
Identity Node  → Local agent identity
Intent Node    → Private action request
Observation    → Internal state signal

# ERC-8004 external governance
AgentID        → Global, portable identity
Validation     → Cryptographic proof of work
Reputation     → On-chain trust score

# The bridge
Brain mints intent   → 8004 verifies identity
Agent completes work → 8004 posts proof
Contract validates   → payment released
```
Brain is in early development. If you're running multiple AI agents across engineering, sales, or operations and drowning in context management, we want to talk.
Request Early Access