Appendix G: Full Chapter Expansions
This appendix provides deep expansions on topics introduced in the main chapters — the "go further" material for practitioners who want to understand the why behind the patterns. It covers AI model internals (context windows, token budgets, failure modes), a complete workspace setup walkthrough from scratch, prompt engineering for automation specifically, integration recipes for Google Sheets/Notion/GitHub/Slack, a complete monitoring and observability stack, local LLM integration with Ollama, developer workflow automation, and community-discovered patterns that don't appear in official documentation.
The model understanding section is particularly valuable because it explains why automation sometimes behaves unexpectedly: token budget management at session start, context drift in long sessions, hallucination patterns, and the difference between "thinking" and "non-thinking" model modes. Once you understand that ~13K tokens of a 100K-token context window are consumed before the agent does anything useful, you start designing context files differently.
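The startup overhead can be checked with simple arithmetic. A minimal sketch, using the component estimates quoted in this appendix (system prompt, SOUL, MEMORY, AGENTS, USER, today's memory, conversation history); substitute your own measured token counts:

```python
# Rough session-start budget check. Component sizes are the estimates
# from this appendix, not measured values for any particular setup.
CONTEXT_WINDOW = 100_000

STARTUP_OVERHEAD = {
    "system_prompt": 2_000,
    "SOUL": 500,
    "MEMORY": 1_500,
    "AGENTS": 3_000,
    "USER": 300,
    "todays_memory": 800,
    "conversation_history": 5_000,
}

def remaining_budget(window: int = CONTEXT_WINDOW) -> int:
    """Tokens left for actual work after fixed startup overhead."""
    return window - sum(STARTUP_OVERHEAD.values())

if __name__ == "__main__":
    used = sum(STARTUP_OVERHEAD.values())
    print(f"overhead: {used} tokens ({used / CONTEXT_WINDOW:.1%} of window)")
    print(f"remaining: {remaining_budget()} tokens for useful work")
```

Running the numbers makes the design pressure concrete: ~13% of the window is gone before the first real instruction, which is why the appendix recommends concise context files and regular memory compaction.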
The integration cookbook section provides complete working code for Google Sheets (CSV export + Apps Script API), Notion (database queries and page creation), GitHub (PR status via API), and Slack (message sending and channel reading). Each integration shows both the Python code and the cron task configuration to wire it into a scheduled automation. The monitoring section extends this into a complete four-layer stack: application logging, cron execution tracking, health checks, and alerting — with full Python implementations for metrics collection and dashboard generation.
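The cron execution tracking layer of that stack can be sketched in a few lines. This is an illustrative stand-in, not the appendix's actual implementation; the `cron_history.jsonl` path and the record fields are hypothetical choices:

```python
import json
import subprocess
import sys
import time
from pathlib import Path

HISTORY = Path("cron_history.jsonl")  # hypothetical history-log location

def run_tracked(name: str, cmd: list[str]) -> dict:
    """Run one scheduled job and append a structured outcome record."""
    start = time.time()
    proc = subprocess.run(cmd, capture_output=True, text=True)
    record = {
        "task": name,
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "duration_s": round(time.time() - start, 2),
        "status": "ok" if proc.returncode == 0 else "error",
        "returncode": proc.returncode,
    }
    # One JSON object per line keeps the history grep- and tail-friendly.
    with HISTORY.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record

if __name__ == "__main__":
    print(run_tracked("self-test", [sys.executable, "-c", "print('hi')"]))
```

The same record shape feeds the other layers: a metrics collector can aggregate `duration_s` and `status`, and an alerting rule can fire when consecutive `error` records cross a threshold.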
The community patterns section is unique to this appendix: "Dead Man's Switch" for safety checks, "Shadow Run" for pre-deployment validation, "Chaos Engineering" for verifying that safety systems actually work, "Context Summary on Demand" for long-project status, and "Agent Observation" for real-time progress tracking. These patterns emerged from real-world use and appear in no official documentation.
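The Shadow Run pattern reduces to a single guard around every externally visible action. A minimal sketch, assuming the `SHADOW_MODE` environment variable described in the Key Items below; the log filename and helper names are hypothetical:

```python
import os

def shadow_mode() -> bool:
    """Check SHADOW_MODE at call time so it can be toggled per run."""
    return os.environ.get("SHADOW_MODE") == "1"

def send_message(channel: str, text: str) -> str:
    """Do all the work, but in shadow mode only record the would-be send."""
    if shadow_mode():
        # Review this log before deploying for real.
        with open("shadow_run.log", "a") as f:
            f.write(f"[shadow] would send to {channel}: {text}\n")
        return "shadow"
    _deliver(channel, text)  # placeholder: wire in the real Slack/WhatsApp call
    return "sent"

def _deliver(channel: str, text: str) -> None:
    raise NotImplementedError("replace with the actual delivery call")
```

The key design point is that everything upstream of the guard (fetching, summarizing, formatting) runs identically in both modes, so the shadow log shows exactly what production would have done.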
Key Items
- **Context Window Budget** — Session start consumes ~13K tokens before any useful work (system prompt 2K, SOUL 500, MEMORY 1.5K, AGENTS 3K, USER 300, today's memory 800, conversation history 5K); practical implication: keep context files concise and run memory compaction regularly
- **Prompt Engineering Anatomy** — Complete automation prompts have six parts: trigger context, inputs (with explicit paths), numbered processing steps, outputs (exact file paths), error handling (specific failure modes), and completion signal; explicit prompts produce reliable results, vague prompts produce inconsistent ones
- **Integration Cookbook** — Complete implementations for Google Sheets (CSV export + Apps Script API), Notion (database query + page creation), GitHub (PR status API), Slack (WebClient for sending and reading); each includes Python code + cron task configuration
- **Monitoring Stack (4 layers)** — Application logging (structured log format with timestamps), cron execution tracking (history log with status), health checks (disk/memory/service endpoints), alerting (threshold-based WhatsApp); complete Python implementations for metrics collection and dashboard generation
- **Shadow Run Pattern** — Run new automations with SHADOW_MODE env var that does all work but skips actual sends/external changes; review shadow output log before deploying for real; prevents accidental spam and unintended external actions during development
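The six-part prompt anatomy above can be enforced mechanically by assembling prompts from named sections instead of free text. A sketch under that assumption; the function name and section labels are illustrative, not from the appendix:

```python
def build_prompt(trigger: str, inputs: str, steps: list[str],
                 outputs: str, errors: str, done_signal: str) -> str:
    """Assemble the six mandatory sections in a fixed order.

    Requiring every argument makes it impossible to ship a prompt that
    silently omits error handling or a completion signal.
    """
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return "\n\n".join([
        f"Trigger: {trigger}",
        f"Inputs:\n{inputs}",
        f"Steps:\n{numbered}",
        f"Outputs: {outputs}",
        f"Error handling:\n{errors}",
        f"Completion signal: {done_signal}",
    ])

if __name__ == "__main__":
    print(build_prompt(
        "daily 07:00 cron (morning digest)",
        "- yesterday's notes at an explicit path",
        ["Read the input file", "Summarize into five bullets", "Write the digest"],
        "an explicit output file path (overwrite)",
        "- input missing: write NO INPUT and stop\n- summarization fails: retry once, then log",
        "DIGEST COMPLETE",
    ))
```

Explicit paths and a fixed completion signal are what make the resulting automation verifiable: a watcher can grep the output for the signal instead of guessing whether the run finished.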
Related Concepts
- [[kelly-handbook-ch2-architecture]] for system architecture overview
- [[kelly-handbook-ch6-scheduling-and-cron]] for cron scheduling deep dive
- [[kelly-handbook-ch4-web-automation]] for browser automation and web scraping
- [[karpathy-llm-wiki]] for the LLM wiki pattern this KB follows