The LLM Wiki Pattern — Karpathy's Compounding Knowledge Base
**Source tweet** (Apr 2, 2026 — @karpathy, 20.6M views, 57K likes, 104K bookmarks):

"LLM Knowledge Bases — Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

**Data ingest:** I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally 'compile' a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

**IDE:** I use Obsidian as the IDE 'frontend' where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki, I rarely touch it directly.

**Q&A:** Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents and it reads all the important related data fairly easily at this ~small scale.

**Output:** Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up 'filing' the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always 'add up' in the knowledge base.

**Linting:** I've run some LLM 'health checks' over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity.

**Extra tools:** I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

**Further explorations:** As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM 'know' the data in its weights instead of just context windows.

**TLDR:** raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually, it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts."

**Follow-up tweet** (Apr 4, 2026 — 6.8M views): "Wow, this tweet went very viral! I wanted to share a possibly slightly improved version of the tweet in an 'idea file'. The idea of the idea file is that in this era of LLM agents, there is less of a point/need of sharing the specific code/app, you just share the idea, then the other person's agent customizes & builds it for your specific needs. So here's the idea in a gist format: [gist] You can give this to your agent and it can build you your own LLM wiki and guide you on how to use it etc."
Core insight: Most people's experience with LLMs and documents is RAG: upload files, retrieve relevant chunks at query time, generate an answer. The LLM rediscovers knowledge from scratch on every question. Nothing accumulates. Ask a subtle question requiring synthesis across five documents and the LLM has to find and piece together fragments every time.
The LLM Wiki pattern inverts this. Instead of just retrieving from raw documents at query time, the LLM incrementally builds and maintains a persistent wiki — a structured, interlinked collection of markdown files that sits between you and the raw sources. When you add a new source, the LLM reads it, extracts key information, integrates it into the existing wiki, updates entity pages, revises topic summaries, notes where new data contradicts old claims. The knowledge is compiled once and then kept current, not re-derived on every query.
The Three-Layer Architecture
Raw sources are the curated collection of source documents — articles, papers, images, data files. Immutable. The LLM reads from them but never modifies them. Source of truth.
The wiki is the LLM-generated layer: summaries, entity pages, concept pages, comparisons, an index, a synthesis. The LLM owns this layer entirely — creates pages, updates when new sources arrive, maintains cross-references, keeps everything consistent. You read it; the LLM writes it.
The schema (e.g. CLAUDE.md, AGENTS.md, or in Kelly's case METHOD.md) tells the LLM how the wiki is structured, what conventions to follow, and what workflows to execute when ingesting sources, answering questions, or maintaining the wiki. This is the configuration that makes the LLM a disciplined wiki maintainer rather than a generic chatbot.
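As a concrete sketch, the three layers map naturally onto a repo layout like the following (directory and file names here are illustrative, not prescribed by the pattern):

```
kb/
├── AGENTS.md            # schema: structure, conventions, workflows
├── raw/                 # immutable sources; the LLM reads, never writes
│   ├── some-article.md
│   └── some-paper.pdf
└── wiki/                # LLM-owned compiled layer
    ├── index.md         # catalog of every page
    ├── log.md           # append-only operation log
    ├── sources/         # one summary page per raw source
    └── concepts/        # entity and concept pages
```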
The Three Operations
Ingest. Drop a new source into the raw collection and tell the LLM to process it. The LLM reads the source, discusses key takeaways with you, writes a summary page, updates the index, updates relevant entity and concept pages across the wiki (a single source might touch 10–15 wiki pages), and appends an entry to the log. Best done one at a time with human oversight — read summaries, check updates, guide emphasis.
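The pattern does not prescribe a page format, but a summary page written during ingest might follow a skeleton like this one (the front-matter fields and section names are hypothetical):

```
---
title: Article Title
source: raw/some-article.md
ingested: 2026-04-02
---

## Summary
Two or three paragraphs of key takeaways.

## Key claims
- One claim per bullet, with a pointer back to the source.

## Links
[[related-concept]] · [[related-entity]]
```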
Query. Ask questions against the wiki. The LLM searches for relevant pages, reads them, synthesizes an answer with citations. Answers can take different forms: a markdown page, a comparison table, a slide deck (Marp), a chart (matplotlib), a canvas. Good answers can be filed back into the wiki as new pages — a comparison you asked for, an analysis, a connection you discovered. Explorations compound just like ingested sources.
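Marp is a natural fit here because a deck is just another markdown file: a `marp: true` front-matter flag plus `---` slide separators, which the Marp CLI or an editor plugin renders. A minimal hypothetical deck:

```
---
marp: true
---

# Findings: topic X

- Point one
- Point two

---

# Comparison

| Approach | Tradeoff          |
|----------|-------------------|
| A        | fast but shallow  |
| B        | slow but thorough |
```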
Lint. Periodically health-check the wiki: contradictions between pages, stale claims superseded by newer sources, orphan pages with no inbound links, important concepts mentioned but lacking their own page, missing cross-references, data gaps that could be filled with web search. The LLM suggests new questions and new sources to investigate.
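Some of these checks need an LLM's judgment (contradictions, stale claims), but others are mechanical. A minimal sketch of an orphan-page detector, assuming pages live under `wiki/` and use `[[wikilink]]` syntax (the directory name and the exemptions are assumptions):

```python
import re
from pathlib import Path

WIKI = Path("wiki")
# Capture the target of [[page]], [[page|alias]], and [[page#section]] links.
LINK = re.compile(r"\[\[([^\]|#]+)")

pages = {p.stem: p for p in WIKI.rglob("*.md")}
inbound = {name: 0 for name in pages}

# Credit each link target with one inbound link.
for name, path in pages.items():
    for target in LINK.findall(path.read_text(encoding="utf-8")):
        target = target.strip()
        if target in inbound and target != name:
            inbound[target] += 1

# Orphans: pages nothing else links to. The index and log are navigation
# roots, so they are exempt by convention here.
for name, count in sorted(inbound.items()):
    if count == 0 and name not in ("index", "log"):
        print(f"orphan: {pages[name].relative_to(WIKI)}")
```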
index.md and log.md
Two special files help navigate the growing wiki:
index.md is content-oriented — a catalog of every page with a link, one-line summary, and optional metadata (date, source count). Organized by category. The LLM updates it on every ingest. When answering a query, the LLM reads the index first to find relevant pages, then drills in. Effective at moderate scale (~100 sources, hundreds of pages) without embedding-based RAG infrastructure.
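A hypothetical excerpt (category names and metadata are illustrative):

```
## Papers
- [[attention-survey]]: survey of attention variants (ingested 2026-03-30)
- [[kv-cache-tricks]]: KV-cache compression techniques (2026-04-01, 2 sources)

## Concepts
- [[speculative-decoding]]: draft-then-verify decoding; links 6 papers
```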
log.md is chronological — an append-only record of ingests, queries, and lint passes with consistent prefixes (e.g. `## [2026-04-02] ingest | Article Title`). Parsable with simple Unix tools: `grep "^## \[" log.md | tail -5`. Gives you a timeline of the wiki's evolution.
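A few hypothetical entries in that format:

```
## [2026-04-01] ingest | KV-Cache Compression Survey
Summary written, index updated, 12 pages cross-referenced.

## [2026-04-02] query | Compare decoding strategies
Output: decoding-comparison.md, filed back into the wiki.

## [2026-04-02] lint | Monthly health check
2 orphan pages found, 1 stale claim flagged.
```

The grep above would return exactly the three `##` header lines, newest last.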
The Key Maintenance Insight
"The tedious part of maintaining a knowledge base is not the reading or the thinking — it's the bookkeeping. Updating cross-references, keeping summaries current, noting when new data contradicts old claims, maintaining consistency across dozens of pages. **Humans abandon wikis because the maintenance burden grows faster than the value. LLMs don't get bored, don't forget to update a cross-reference, and can touch 15 files in one pass.** The wiki stays maintained because the cost of maintenance is near zero."
The human's job: curate sources, direct the analysis, ask good questions, think about what it all means.
The LLM's job: everything else — summarizing, cross-referencing, filing, bookkeeping.
Relation to Vannevar Bush's Memex
The pattern is related in spirit to Vannevar Bush's Memex (1945) — a personal, curated knowledge store with associative trails between documents. Bush's vision was private, actively curated, with connections between documents as valuable as the documents themselves. The part he couldn't solve was who does the maintenance. The LLM handles that.
This KB: Alignment with the LLM Wiki Pattern
dark-factory-kb is already an instance of this pattern:
| LLM Wiki Concept | dark-factory-kb Implementation |
|---|---|
| Raw sources | `raw/kelly/tweets/`, `raw/kelly/handbook/`, `raw/bmad/` |
| Wiki (compiled articles) | `compiled/sources/*.md` — LLM-generated summaries |
| Schema (AGENTS.md) | `AGENTS.md`, `METHOD.md`, `STAGES.md` — conventions and workflows |
| index.md | `compiled/sources/index.html` — catalog of all KB articles |
| log.md | `compiled/log.md` — append-only timeline of compiles and health checks |
| Ingest | Subagent compile passes (Kelly handbook chapters, tweet batches) |
| Lint | `gates.py` contradiction checks, completeness checks |
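The contradiction checks require an LLM, but a completeness gate in the spirit of `gates.py` can be sketched deterministically. This is an illustrative stand-in, not the actual `gates.py`, and it assumes compiled articles reuse the stem of their raw source file, which is purely a naming convention assumed here:

```python
from pathlib import Path

# Completeness gate (illustrative): every raw source file should have a
# corresponding compiled article. Assumes compiled/sources/<stem>.md reuses
# the raw file's stem; that convention is an assumption of this sketch.
raw_stems = {p.stem for p in Path("raw").rglob("*") if p.is_file()}
compiled_stems = {p.stem for p in Path("compiled/sources").glob("*.md")}

missing = sorted(raw_stems - compiled_stems)
for stem in missing:
    print(f"MISSING: no compiled article for raw source '{stem}'")
print(f"{len(missing)} gap(s) across {len(raw_stems)} raw sources")
```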
See also: [[kelly-research-transcripts]] for the original discussion of this KB's architecture; [[kelly-gas-town-gap-analysis]] for factory comparison; [[kelly-handbook-ch8-memory]] for memory system design in Kelly's own system.