Most AI productivity advice focuses on prompts: write clearer instructions, add more context, use system prompts. That advice misses the core issue. The 10 minutes you spend re-explaining your project context at the start of every session is not a prompting problem. It's an infrastructure problem, and no amount of prompt engineering fixes it.
In a previous article I argued that structured knowledge is the real competitive layer for AI tools. This article is the practical follow-up: what the vault looks like, how sessions start, how agents consume it, and what keeps the system honest over time. A working system, used daily across multiple concurrent projects.
The Vault Skeleton
The foundation is PARA, a framework that divides all knowledge into four categories based on actionability: Projects (active work with a defined outcome), Areas (ongoing responsibilities), Resources (reference material), and Archive (inactive and superseded items). Each category maps to a numbered top-level folder.
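With standard PARA numbering, the top level looks roughly like this (1-Projects/ and 3-Resources/ appear verbatim in the vault; the 2-Areas/ and 4-Archive/ names are assumptions from the convention):

```text
vault/
├── 1-Projects/    # active, scoped work with a defined outcome
├── 2-Areas/       # ongoing responsibilities, no end date
├── 3-Resources/   # stable reference material, incl. Knowledge Base/ and AI/
└── 4-Archive/     # inactive and superseded material
```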
Why this matters for AI agents: predictability. When an agent reads from 1-Projects/, it knows it's looking at active, scoped work. When it reads from 3-Resources/Knowledge Base/Standards/, it knows it's looking at stable reference material that applies across projects. The folder name carries semantic weight. The agent doesn't need to open a file to understand its role in the system.
Each project folder follows the same internal pattern: a hub document at the root, then subfolders for Architecture, Research, Strategy, and Assets. An agent entering any project folder for the first time already knows where to look, because the layout is identical everywhere.
Frontmatter: The Machine-Readable Layer
Every Markdown file in the vault carries YAML frontmatter at the top. This is what makes the vault queryable rather than just browsable.
The schema stays small and consistent across the vault.
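An illustrative frontmatter block (the field values are hypothetical; the four required fields match the audit rules described later in the article):

```yaml
---
title: Payment Platform - Architecture Decisions
date: 2025-06-14
tags: [project, architecture, decisions]
status: living
---
```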
The status field carries specific meaning. living means the document is actively maintained and will change. locked means it's a stable reference, unlikely to need updates. archived means superseded, kept for history but flagged with a warning callout. draft means work in progress. An agent filtering for current project context can select on status: living without reading every file in the vault.
Hub Documents: Where Agents Start
Every project and area has a hub document. This is the single entry point an AI agent reads to understand the full scope of a project.
The anatomy is consistent:
Overview in two to three sentences: what this project is, current phase, technology stack, key constraint. No narrative padding — just enough for an agent to understand scope without reading deeper.
Key Documents grouped by category, each as a wikilink. Architecture documents, product specs, implementation guides, decision logs. The hub is a map, not a summary. An agent follows the links it needs and ignores the rest.
Status as a checkbox list showing completed phases and current work. This gives an agent immediate signal about where the project stands at a glance.
Related section at the bottom linking to parent areas, sibling projects, and relevant Knowledge Base entries.
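A minimal hub skeleton following that anatomy (the project name and link targets are placeholders, not real vault entries; the stack details echo the constraints mentioned later in this article):

```markdown
# Payment Platform Hub

## Overview
B2B payment platform, currently in build phase. Supabase backend with
RLS policies and a two-instance database model (locked decision).

## Key Documents
### Architecture
- [[Payment Platform - System Architecture]]
- [[Payment Platform - Decision Log]]

### Strategy
- [[Payment Platform - Product Spec]]

## Status
- [x] Discovery
- [x] Architecture
- [ ] Build

## Related
- [[2-Areas/Consulting]]
- [[Knowledge Base - NIS2 Compliance]]
```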
The linking conventions are deliberate. Hub documents link down to their children; content documents link back up to their hub and sideways to related docs. Cross-project links always go through the Knowledge Base or PARA index documents, never directly between project files. This keeps the link graph navigable and prevents tangled references that confuse both humans and agents.
The Persistent Memory Core
The single most impactful file in the system is CLAUDE.md. It lives in .claude/ and gets loaded automatically at the start of every Claude Code and Claude Cowork session. The agent reads it before you type your first message.
What goes in it: your identity and working style (how you think, what you expect from outputs, what frustrates you), your active projects with current phase and technology stack, your architectural constraints and non-negotiable decisions, your communication preferences (structured markdown, MECE frameworks, no fluff), and the key decision history that should persist across sessions.
This is not a biography. It is a machine-readable operating manual. When the memory core states that your platform uses Supabase with RLS policies and a two-instance database model, the agent will not suggest Firebase. When it states that you expect MECE decomposition and explicit trade-offs, the agent shapes its output accordingly from the first interaction.
Practical starting point: describe your top three to five active projects in 10-15 lines each. Cover purpose, current phase, stack, and key constraints. Add a section on your communication preferences. That gives you roughly 300-500 lines of persistent context, enough to transform every session from cold start to warm handshake.
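One project entry in the memory core can be sketched like this (the project details are invented for illustration; the Supabase constraint reuses the example from above):

```markdown
## Active Projects

### Payment Platform (build phase)
- Purpose: B2B payment platform for EU clients
- Stack: Supabase with RLS policies, two-instance database model
- Locked decisions: no Firebase; database model is non-negotiable
- Current focus: functional spec and test matrix

## Communication Preferences
- Structured markdown, MECE decomposition, explicit trade-offs
- No narrative padding, no filler
```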
Review it monthly. Projects evolve, constraints shift, decisions accumulate. The memory core is a living document.
The Knowledge Base: Write Once, Reference Everywhere
3-Resources/Knowledge Base/ is the cross-project reference library. It holds stable, reusable knowledge organized by domain: Enterprise Architecture foundations, regulatory standards (NIS2, EU AI Act, GDPR, ISO 25010), assessment frameworks, tooling comparisons, methodology guides, and certification study material.
Each entry carries status: locked in its frontmatter. These documents are not project-specific. They represent accumulated expertise that any project can reference through a wikilink from its hub document. When an agent working on a public sector engagement needs to understand NIS2 compliance, it follows the link to the Knowledge Base entry. The knowledge exists once and gets applied everywhere it's needed.
This eliminates a pattern that wastes significant time: re-explaining the same framework, regulation, or methodology in every new session because the agent has no persistent access to reference material. With the Knowledge Base in the vault, that explanation exists as a structured document the agent can read on demand.
The Knowledge Base also contains methodology files that govern output quality: a detailed writing style analysis, an AI writing pitfalls reference covering em-dash overuse, hedging language, three-item disease, and other patterns that make AI-generated text identifiable. These are locked documents that any content-producing workflow can reference.
The AI Infrastructure Layer
This is where the vault goes beyond a knowledge system. 3-Resources/AI/ contains not just knowledge about AI agents but their actual operating definitions, versioned alongside everything else.
Four subdirectories, each with a distinct function:
Claude-Skills/ holds reusable skill definitions. Each skill is a folder with a SKILL.md file that gets injected into agent context when referenced. Skills are the composable building blocks: architecture-constraint-enforcer, mece-scenario-architect, trade-off-comparator, executive-brief-writer, risk-dependency-mapper, and others. Some are generic and apply across any project. Others are workflow-specific templates for business requirements documents, functional specifications, or test matrices. The skills folder is symlinked to ~/.claude/skills/, which means every skill defined in the vault is available in every Claude Code session automatically.
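A skill definition can be sketched as follows (the frontmatter fields follow Claude Code's SKILL.md format; the body instructions are hypothetical, built around the skill name mentioned above):

```markdown
---
name: architecture-constraint-enforcer
description: Checks generated artifacts against locked architecture decisions before output.
---

When producing or reviewing an artifact:
1. Load the project's decision log and the persistent memory core.
2. Flag any suggestion that contradicts a locked decision.
3. Never silently substitute alternatives; surface the conflict instead.
```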
Claude-Agents/ holds subagent definitions for a multi-agent product development pipeline. Seven agents, each with a defined role: product requirements analyst, functional architect, technical spec writer, architecture guardian, test engineer, QA reviewer, and developer generator. Each agent reads from the persistent memory core, uses specific skills, and produces artifacts at defined output paths. The agents are coordinated through a phased workflow with explicit gating: no phase proceeds until the previous phase passes review.
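A subagent definition might look like this (the frontmatter shape follows Claude Code's subagent format; the tool list and body are assumptions, not the actual pipeline files):

```markdown
---
name: architecture-guardian
description: Validates artifacts from earlier phases against locked architecture decisions. Gates the pipeline.
tools: Read, Grep, Glob
---

Read CLAUDE.md and the project hub first, then validate the submitted
artifact against the decision log. Approve it, or return a list of
violations; the next phase does not start until approval.
```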
The vault documents the full dependency graph between agents and skills, so a change to one skill can be traced to every pipeline stage that consumes it.
Claude-Config/ holds templates for project-level setup: a CLAUDE.md template, a settings template, a symlink creation script. When starting a new project, you copy the templates, fill in project-specific details, and the multi-agent infrastructure is ready. Configuration is infrastructure, not improvisation.
Runtime-Patterns/ documents production agent patterns that differ from the build-time subagents. Build-time agents help you write code during development. Runtime agents are the deployed code, running as stateful graphs in production with checkpointing, human-in-the-loop gates, and observability. Different lifecycle, different framework. The vault keeps both documented in the same navigable structure so you can reason about the full agent stack in one place.
This is the part most people miss. The vault is not a knowledge system that AI tools happen to read. It is the agent infrastructure layer. Skill definitions, agent configurations, pipeline orchestration, and runtime patterns live alongside project context and reference material, a single source of truth for both what you know and how your AI tools operate.
How a Session Actually Starts
Open Obsidian. The Dashboard surfaces active projects through dataview queries (deadlines, priorities, current status) so you get a five-second orientation before any AI session begins.
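A dashboard query along these lines does the surfacing (Dataview's DQL syntax; the deadline and priority field names assume they exist in project frontmatter):

```dataview
TABLE status, deadline, priority
FROM "1-Projects"
WHERE status = "living"
SORT deadline ASC
```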
Start a Claude Cowork or Claude Code session. The agent reads CLAUDE.md automatically, which means full context is loaded before you type anything: who you are, what you're working on, what constraints apply, which decisions are locked.
Point the agent at a specific project. It reads the hub document, follows links to the relevant architecture documents and decision logs, and starts working with project-specific context from the first interaction. No preamble, no "let me explain the project structure" ritual.
During the session, the vault is both input and output. Decisions go into the decision log, research outputs land in the project's Research subfolder, generated artifacts go into their documented location. The agent knows where things belong because the folder conventions tell it.
After the session, update the hub document status and link any new artifacts. The next session starts richer than this one ended, because every captured decision and linked artifact adds to the context the agent will load next time.
Keeping the Vault Honest
What most people expect: build the system once, maintain it forever through discipline alone. What actually happens: things break down. Frontmatter drifts when you create files in a hurry and skip the tags. Documents accumulate without links from any hub. The persistent memory core gets stale as projects evolve faster than you update it. Subfolders multiply beyond useful depth. Knowledge Base entries persist that nothing references anymore.
The countermeasure is a scheduled weekly audit that runs automatically in Claude Cowork. The audit task checks seven categories:
Frontmatter validation scans every .md file for required fields (title, date, tags, status), verifies the status value matches the taxonomy, and flags malformed dates or missing metadata.
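The frontmatter check is straightforward to sketch in Python (a minimal validator mirroring the rules above, not the actual audit task; a real pass would use a proper YAML parser):

```python
import pathlib
import re

REQUIRED = {"title", "date", "tags", "status"}
STATUSES = {"living", "locked", "archived", "draft"}

def audit_frontmatter(vault: pathlib.Path) -> list[str]:
    """Return one problem string per violation found in the vault's .md files."""
    problems = []
    for path in vault.rglob("*.md"):
        text = path.read_text(encoding="utf-8")
        match = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
        if not match:
            problems.append(f"{path}: missing frontmatter block")
            continue
        # Naive key: value extraction from the frontmatter lines.
        keys = dict(
            line.split(":", 1)
            for line in match.group(1).splitlines()
            if ":" in line
        )
        for field in sorted(REQUIRED - keys.keys()):
            problems.append(f"{path}: missing field '{field}'")
        status = keys.get("status", "").strip()
        if status and status not in STATUSES:
            problems.append(f"{path}: unknown status '{status}'")
    return problems
```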
Wikilink integrity extracts all [[wikilinks]] from every file, cross-references against actual filenames in the vault, and reports broken links and orphaned documents. Files that exist but have no incoming links surface immediately.
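The link-integrity pass can be sketched the same way (a simplified version that resolves [[Target]] and [[Target|alias]] forms against file stems; the real audit would also handle headings, embeds, and subfolder paths):

```python
import pathlib
import re

# Capture the link target: everything after "[[" up to "|", "#", or "]]".
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def audit_links(vault: pathlib.Path) -> tuple[set[str], set[str]]:
    """Return (broken link targets, orphaned file stems with no incoming links)."""
    stems = {p.stem for p in vault.rglob("*.md")}
    targets = set()
    for path in vault.rglob("*.md"):
        for m in WIKILINK.finditer(path.read_text(encoding="utf-8")):
            targets.add(m.group(1).strip())
    broken = targets - stems
    orphans = stems - targets
    return broken, orphans
```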
Hub note coverage verifies that every project folder has a hub document with the required sections: overview, key documents, status, and related links. Missing or incomplete hubs get flagged.
Related section validation checks that every file ends with a ## Related section containing at least one wikilink. Files missing the section get flagged for review.
Cross-reference discovery goes beyond structural checks. It reads document content, identifies topical overlap based on shared concepts, related entities, and connected workflows, then surfaces missing links ranked by confidence. High-confidence suggestions are documents that discuss the same architecture pattern or decision but don't link to each other. Medium-confidence ones are implementation documents that should reference their source strategy.
Archived file checks verify that documents with status: archived carry a warning callout near the top referencing their replacement.
Structure consistency flags files in unexpected locations and orphaned binary assets that no Markdown file references.
The output is a vault-audit-YYYY-MM-DD.md report in the vault root with a compliance score: frontmatter percentage, broken link count, hub coverage, cross-reference density. The audit runs autonomously; you review the report weekly and fix issues incrementally.
Total maintenance investment: 15-20 minutes per day for capturing decisions and linking artifacts during normal work, plus a weekly audit review that takes 30 minutes. The audit catches what discipline misses.
What Changes
I covered the economic argument in the previous article: structured knowledge reduces re-explanation, improves first-attempt quality, and shifts AI from answer engine to working partner. This section is about something that only becomes visible once the system is running.
The multi-agent pipeline produces consistent, architecture-compliant artifacts because every agent reads from the same structured source of truth. A product requirements agent generates a BRD that the architecture guardian can validate against locked decisions, because both agents read from the same hub document and the same persistent memory core. That kind of cross-agent consistency doesn't happen through better prompting. It happens because the knowledge layer provides a shared operating surface.
The weekly audit catches structural drift before it degrades agent performance. Linking conventions keep the knowledge graph navigable as volume grows, and the frontmatter taxonomy lets you filter and scope agent context without manual curation.
The vault is not a note-taking system; it is infrastructure, and it rewards the same deliberate maintenance you'd give to any production system. The initial setup takes a weekend. What it enables keeps expanding from there.