What is CIR?
Every AI system today either has no memory or bolts on memory as an afterthought — RAG chunks, vector similarity, chat history that scrolls off the context window. These systems store things and help you find them. None of them can reason about what they contain. They can't tell you that two pieces of evidence contradict each other, or that your confidence in a claim should have changed last Thursday when the supporting evidence was superseded.
Memory is a function. CIR is a substrate.
Memory is something you do: put things in, get things out. A substrate is something you build on. The difference matters: a filing cabinet stores files; a nervous system structures, weighs, connects, and explains what passes through it. One is passive retrieval. The other is the infrastructure through which intelligence operates.
CIR — Cognitive Infrastructure Retrieval — is the substrate at the heart of mentu.
What makes CIR different
Three things separate CIR from memory systems, knowledge graphs, and RAG pipelines.
1. Evidence, not conclusions
CIR maintains a constitutional separation between what happened and what the system infers. Three layers of record, each with a distinct role:
Layer 1: Raw signal — Immutable, append-only. The evidence. What was actually observed, measured, or captured. A raw signal is never modified after creation. It is the ground truth.
Layer 2: Semantic interpretation — Versionable, re-computable. The system's understanding of what the evidence means. Interpretations can be revised as new evidence arrives without destroying the original observation.
Layer 3: Mechanics state — Derived, dynamic. Trust scores, contradiction status, salience rankings. These are the substrate's judgments about the evidence — they change as the graph of knowledge evolves.
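The three-layer separation can be sketched as TypeScript types. This is a minimal illustration, not CIR's actual schema; all field and type names here are hypothetical:

```typescript
// Layer 1: raw signal. Immutable evidence, never modified after creation.
interface RawSignal {
  readonly id: string;
  readonly observedAt: number; // epoch ms
  readonly payload: string;    // what was actually observed
}

// Layer 2: semantic interpretation. Versioned and re-computable
// without ever touching the Layer 1 record it points at.
interface Interpretation {
  readonly signalId: string; // back-reference to the evidence
  version: number;
  meaning: string;
}

// Layer 3: mechanics state. Derived judgments that change as the graph evolves.
interface MechanicsState {
  readonly signalId: string;
  trustScore: number;   // 0..1
  contradicted: boolean;
  salienceRank: number;
}

// Revising an interpretation produces a new version; the raw signal is untouched.
function revise(prev: Interpretation, meaning: string): Interpretation {
  return { signalId: prev.signalId, version: prev.version + 1, meaning };
}
```

The key property is that revision creates a new Layer 2 record rather than overwriting anything: the observation and the belief about it remain separately addressable.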
This matters because if you collapse these layers — which every other system does — the system cannot distinguish what it observed from what it believes. When the world changes, it can't re-evaluate. It has already overwritten the evidence with the conclusion.
2. Confidence that changes
Every signal in CIR carries not one confidence score but three:
Asserted confidence — Frozen at creation. The confidence the source claimed when the signal was captured. A direct measurement might assert 0.95. An inference from indirect evidence might assert 0.6. This value never changes.
Effective confidence — Computed by trust propagation through the evidence graph. When supporting signals are weakened or contradicted, effective confidence drops automatically. When they're strengthened, it rises.
Current confidence — Effective confidence multiplied by time decay, computed at read time, never stored. Knowledge ages. An observation from last week carries more weight than one from six months ago.
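How the three values relate can be sketched as follows. The exponential curve and the 30-day half-life are illustrative assumptions, not CIR's actual decay parameters:

```typescript
interface ConfidenceState {
  readonly asserted: number;  // frozen at creation, never changes
  effective: number;          // recomputed by trust propagation
  readonly createdAt: number; // epoch ms
}

const HALF_LIFE_MS = 30 * 24 * 60 * 60 * 1000; // assumed 30-day half-life

// Current confidence is computed at read time and never stored:
// effective confidence multiplied by a time-decay factor.
function currentConfidence(s: ConfidenceState, nowMs: number): number {
  const age = Math.max(0, nowMs - s.createdAt);
  const decay = Math.pow(0.5, age / HALF_LIFE_MS);
  return s.effective * decay;
}
```

A fresh signal reads at its full effective confidence; a signal exactly one half-life old reads at half of it, without anything being written back to storage.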
Every confidence change is recorded in a trust event log with its cause — the specific signal that triggered the adjustment. The system can always explain why it believes what it believes.
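A trust event log entry might look like the sketch below. Field names are illustrative; the point is that every adjustment carries its cause, so a belief can be explained by replaying its events:

```typescript
interface TrustEvent {
  readonly signalId: string;      // whose confidence changed
  readonly causeSignalId: string; // the signal that triggered the adjustment
  readonly before: number;
  readonly after: number;
  readonly at: number;            // epoch ms
}

// Appending is the only write; the log itself is never edited in place.
function recordAdjustment(log: TrustEvent[], e: TrustEvent): TrustEvent[] {
  return [...log, e];
}

// "Why do I believe this?" is answered by the signal's event trail.
function explain(log: TrustEvent[], signalId: string): TrustEvent[] {
  return log.filter(e => e.signalId === signalId);
}
```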
3. Contradictions are features
When two signals conflict — one says the API is healthy, another says it is down — CIR does not silently pick one. It surfaces the contradiction as a first-class object. The contradiction persists until resolved by new evidence or human judgment.
Contradictions are not errors. They are evidence that the world is more complex than a single observation suggests. A system that hides contradictions is a system that lies to you about the consistency of its own knowledge.
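A contradiction as a first-class object might be modeled like this (a sketch; the type and function names are hypothetical):

```typescript
type Resolution = "open" | "resolved-by-evidence" | "resolved-by-human";

interface Contradiction {
  readonly id: string;
  readonly signalA: string; // e.g. "the API is healthy"
  readonly signalB: string; // e.g. "the API is down"
  status: Resolution;
}

// Detection does not pick a winner. It creates a persistent object
// that stays open until something resolves it.
function detectConflict(id: string, a: string, b: string): Contradiction {
  return { id, signalA: a, signalB: b, status: "open" };
}

// Resolution is an explicit event, by new evidence or human judgment.
function resolve(c: Contradiction, by: Exclude<Resolution, "open">): Contradiction {
  return { ...c, status: by };
}
```

Note that both conflicting signals survive resolution; only the contradiction's status changes, consistent with the append-only evidence layer above.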
The scientific record analogy
Science doesn't "remember." It records observations, builds theories from evidence, detects when new findings contradict existing theories, revises understanding, and maintains provenance. Papers don't disappear when they're superseded — they remain in the record, cited, extended, or contradicted by what comes after.
CIR works the same way. Signals are observations. Relations are citations. Confidence propagates through the citation graph. Patterns crystallize when the same structure recurs. When you query CIR, you don't get "the answer" — you get evidence, weighted by trust, positioned in time.
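The shape of a result, evidence weighted by trust and positioned in time, might be sketched like so. This is illustrative and not the actual query API:

```typescript
interface EvidenceHit {
  signalId: string;
  confidence: number; // current confidence at read time
  observedAt: number; // epoch ms: where the evidence sits in time
  payload: string;
}

// A query returns ranked evidence, not a single answer.
function rankEvidence(hits: EvidenceHit[]): EvidenceHit[] {
  return [...hits].sort((a, b) => b.confidence - a.confidence);
}
```

The caller sees every hit with its weight and timestamp, and decides, or lets the agent decide, what the evidence adds up to.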
The read-before-act invariant
Before mentu takes any action, it must consult its own substrate. Hard engineering constraint, not a suggestion. Even if the result set is empty, the query must happen.
This is what transforms CIR from a database into an active substrate. Every decision is shaped by accumulated evidence. Intelligence is not stateless function execution — it is reasoning grounded in what has come before.
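The invariant can be expressed as a wrapper that every action passes through. This is a sketch under stated assumptions: `querySubstrate` and `Action` are hypothetical stand-ins, not mentu's real interfaces:

```typescript
interface Evidence {
  signalId: string;
  confidence: number;
}

type Action<T> = (evidence: Evidence[]) => T;

// Stand-in for the substrate query; in practice this would hit CIR.
function querySubstrate(topic: string): Evidence[] {
  return []; // may legitimately be empty, but the query still happens
}

// Read-before-act: the substrate query is unconditional.
// There is no code path from intent to action that skips it.
function act<T>(topic: string, action: Action<T>): T {
  const evidence = querySubstrate(topic); // always runs, even for empty results
  return action(evidence);
}
```

Making the query structural rather than optional is what turns the invariant into an engineering constraint instead of a convention.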
Current scale
CIR is running in production:
- ~245,000 signals across 47+ domains
- ~3.9 million relations connecting signals into an evidence graph
- ~113,000 vector embeddings for semantic search
- ~2,800 detected patterns
- Active contradiction monitoring across the entire corpus
Where to go next
- CIR Substrate — Technical deep-dive into the five operational layers, signal schema, relation types, and query API
- Architecture — How CIR fits into the mentu system alongside the CLI, daemon, and cloud API
- Writing Your First Script — Start capturing signals and querying CIR from your own TypeScript scripts