synthesizing-institutional-knowledge
Builds the organizational memory schema your AI agent needs to answer why — capturing decision provenance, causal chains, and event context that embedding-based retrieval permanently discards.
by Jeremy Banning
About This Skill
What This Skill Does
When you embed a document, you preserve what it says. You lose who decided it, why, what it replaced, and what it caused. This skill teaches you to capture that missing provenance as structured institutional memory — so your agent can answer questions that no RAG system can touch.
Problems It Solves
Provenance blindness — "Why are we doing it this way?" is unanswerable from a vector store because the reasoning was never indexed, only the output document.
Type 3 knowledge gap — most organizations capture facts (Type 1) and some events (Type 2), but almost never capture causal reasoning (Type 3) at the time decisions are made. This skill closes that gap before it compounds.
Retroactive ingestion failure — teams trying to rebuild institutional history from old docs discover the causal edges were never written down. This skill provides a model-assisted extraction workflow with human review for causal edge validation.
"Why do we use X?" queries — technology, policy, and architectural choices require graph traversal over decision chains, not semantic similarity.
What You Get
The skill defines three knowledge types with distinct storage targets:
Declarative (Type 1): Facts and current-state policies → Vector RAG. The only category where embeddings are structurally sufficient.
Episodic (Type 2): Events, incidents, decisions with timestamps → Temporal store with full event schema.
Causal (Type 3): Decision rationale, constraint chains, alternatives considered → Knowledge graph with explicit causal predecessor/successor edges.
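The three-way routing above can be sketched as a small type-to-backend map. This is a minimal illustration, assuming nothing about the skill's internals; the backend names are placeholders, not prescribed implementations:

```python
from enum import Enum

class KnowledgeType(Enum):
    DECLARATIVE = 1  # Type 1: facts, current-state policies
    EPISODIC = 2     # Type 2: timestamped events, incidents, decisions
    CAUSAL = 3       # Type 3: rationale and constraint chains

# Illustrative storage targets keyed by knowledge type.
STORAGE_TARGET = {
    KnowledgeType.DECLARATIVE: "vector_rag",   # embeddings are sufficient here
    KnowledgeType.EPISODIC: "temporal_store",  # time-indexed event records
    KnowledgeType.CAUSAL: "knowledge_graph",   # explicit predecessor/successor edges
}

def route(knowledge_type: KnowledgeType) -> str:
    """Return the storage backend for a classified knowledge item."""
    return STORAGE_TARGET[knowledge_type]

print(route(KnowledgeType.CAUSAL))  # knowledge_graph
```

The point of the split is that only Type 1 survives embedding intact; Types 2 and 3 need structure (time, edges) that a vector index cannot represent.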
You also get a complete institutional event schema — a JSON structure capturing actors, affected entities, rationale, alternatives considered, constraints, outcome, and causal links — plus an ingestion workflow for both live capture and retroactive extraction from legacy documents like ADRs, post-mortems, and meeting notes.
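A sketch of what one such event record might look like, expressed as a JSON-serializable dict. Field names and values here are assumptions based on the description above, not the skill's actual schema:

```python
import json
from datetime import datetime, timezone

# Hypothetical institutional event record; IDs and contents are invented
# for illustration only.
event = {
    "event_id": "evt-2023-041",
    "timestamp": datetime(2023, 6, 1, tzinfo=timezone.utc).isoformat(),
    "event_type": "architecture_decision",
    "actors": ["platform-team"],
    "affected_entities": ["order-service", "monolith"],
    "decision": "Extract order processing into a standalone service",
    "rationale": "Monolith deploy cadence blocked urgent checkout fixes",
    "alternatives_considered": ["modularize in place", "buy a vendor solution"],
    "constraints": ["two-engineer budget", "zero-downtime migration"],
    "outcome": "order-service live; checkout deploys decoupled",
    "causal_links": {
        "predecessors": ["evt-2023-017"],  # the incident that triggered this
        "successors": [],
    },
}

print(json.dumps(event, indent=2))
```

Note that `rationale`, `alternatives_considered`, and `causal_links` are exactly the fields a plain document embedding never captures; they exist only if recorded at decision time or extracted retroactively.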
Who Should Use This
Teams building AI agents that must answer questions about organizational reasoning — why decisions were made, how the current architecture evolved, what historical constraints drive current policy — across engineering, compliance, strategy, or any domain where institutional memory compounds over time.
Use Cases
- Engineering knowledge base: An agent over ADRs, design docs, and incident reports can answer "Why did we migrate off the monolith?" by traversing causal predecessor chains — not just finding the migration doc.
- Compliance and audit agent: "What constraints drove the current data retention policy?" requires causal context from the regulatory event that preceded it, not just the policy text.
- Onboarding acceleration: New engineers ask "Why is this system built this way?" The institutional event graph answers with the full decision chain — alternatives considered, constraints, and outcome — rather than returning the design doc with no context.
- Post-mortem reconstruction: "What sequence of decisions led to this incident?" is an episodic + causal query over timestamped events with explicit predecessor links.
- Strategic context for AI advisors: Agents assisting leadership on current strategy need to know what was decided before, why it was decided, and what it caused — not just what current policy says.
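The traversal behind a query like "Why did we migrate off the monolith?" can be sketched as a walk over predecessor edges. The event IDs, summaries, and in-memory graph here are hypothetical stand-ins for a real graph backend:

```python
# Toy causal graph: each event lists the events that caused it.
events = {
    "evt-churn-spike": {
        "summary": "Checkout outages drove customer churn",
        "predecessors": [],
    },
    "evt-scaling-limit": {
        "summary": "Monolith could not scale checkout independently",
        "predecessors": ["evt-churn-spike"],
    },
    "evt-migration": {
        "summary": "Decision: migrate off the monolith",
        "predecessors": ["evt-scaling-limit"],
    },
}

def why(event_id: str, graph: dict) -> list[str]:
    """Trace the causal chain behind an event, root cause last."""
    chain, frontier = [], [event_id]
    while frontier:
        current = frontier.pop()
        chain.append(graph[current]["summary"])
        frontier.extend(graph[current]["predecessors"])
    return chain

for step in why("evt-migration", events):
    print(step)
```

Semantic similarity would retrieve the migration doc itself; only the edge walk surfaces the churn incident two hops back that actually answers "why."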
$10
One-time purchase • Own forever
Security Scanned
Passed automated security review
8/8 checks passed
Requirements
Best with Claude Code 1.2+. No external dependencies required — the event schema is storage-agnostic and maps directly to Neo4j, TimescaleDB, or any graph/time-series backend. Designed to work alongside designing-hybrid-context-layers (architecture) and diagnosing-rag-failure-modes (failure diagnosis).
Creator
Jeremy Banning
Over 20 years of experience in data exploration and digital signal processing across sectors including fintech, aerospace, and defense. Expertise in risk analysis, engine health monitoring, and predictive maintenance for one of the world's leading jet engine manufacturers, developing machine learning models and helping organizations achieve real impact from their analytics initiatives. Passionate about agentic workflows, the enterprise context layer, and information synthesis. Specializing in enterprise AI.
Similar Skills
designing-hybrid-context-layers
Architects the right retrieval strategy for every query — teaching your agent when to use RAG, a knowledge graph, or a temporal index instead of defaulting to vector search for everything.
diagnosing-rag-failure-modes
RAG fails quietly. It retrieves documents, returns confident-looking answers, and misses the question entirely — because the question required connecting facts across documents, reasoning about sequence, or tracing causation. This skill gives you a five-question diagnostic checklist that classifies any failing query as either RAG-safe or structurally RAG-incompatible, then maps it to the specific failure pattern and the architectural fix that resolves it.
code-reviewer
Reviews your code for bugs, security vulnerabilities, logic errors, performance issues, and style violations. Organizes findings by severity and suggests fixes with code examples.
git-commit-writer
Writes conventional commit messages by analyzing your staged git changes. Detects commit type, scope, and breaking changes automatically.