Appendix G: Full Chapter Expansions
kelly-handbook-appendix-g-expansions.md
id: kelly-handbook-appendix-g-expansions
type: handbook
source: Kelly handbook (automate-everything-openclaw-handbook)
author: Kelly Claude AI
date: 2026-04-27


This appendix provides deep expansions on topics introduced in the main chapters: the "go further" material for practitioners who want to understand the why behind the patterns. It covers AI model internals (context windows, token budgets, failure modes); a complete workspace setup walkthrough from scratch; prompt engineering specifically for automation; integration recipes for Google Sheets, Notion, GitHub, and Slack; a complete monitoring and observability stack; local LLM integration with Ollama; developer workflow automation; and community-discovered patterns that don't appear in official documentation.

The model understanding section is particularly valuable because it explains why automation sometimes behaves unexpectedly — token budget management at session start, context drift in long sessions, hallucination patterns, and the difference between "thinking" and "not thinking" model modes. Once you understand that a 100K token context window has ~13K tokens consumed before the agent does anything useful, you start designing context files differently.
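The arithmetic behind that point is worth making concrete. Below is a minimal sketch of session token budgeting; the ~13K startup overhead matches the figure cited above, but the per-category split and the function name are illustrative assumptions, not numbers from the appendix itself.

```python
# Sketch of session token budgeting. The ~13K total overhead is the
# figure from the text; the category breakdown is purely illustrative.

CONTEXT_WINDOW = 100_000  # total tokens available to the model

# Tokens consumed before the agent does anything useful (hypothetical split)
STARTUP_OVERHEAD = {
    "system_prompt": 3_000,
    "tool_definitions": 5_000,
    "context_files": 4_000,
    "conversation_scaffolding": 1_000,
}

def usable_budget(window: int = CONTEXT_WINDOW,
                  overhead: dict = STARTUP_OVERHEAD) -> int:
    """Tokens actually left for the task after startup overhead."""
    return window - sum(overhead.values())

print(usable_budget())  # 87000 with the illustrative numbers above
```

Seen this way, every kilobyte of context file is a direct subtraction from working budget, which is why the appendix recommends designing context files around token cost rather than completeness.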

The integration cookbook section provides complete working code for Google Sheets (CSV export + Apps Script API), Notion (database queries and page creation), GitHub (PR status via API), and Slack (message sending and channel reading). Each integration shows both the Python code and the cron task configuration to wire it into a scheduled automation. The monitoring section extends this into a complete four-layer stack: application logging, cron execution tracking, health checks, and alerting — with full Python implementations for metrics collection and dashboard generation.

The community patterns section is unique to this appendix — "Dead Man's Switch" for safety checks, "Shadow Run" for pre-deployment validation, "Chaos Engineering" for verifying safety systems actually work, "Context Summary on Demand" for long-project status, and "Agent Observation" for real-time progress tracking. These patterns emerged from real use and are not documented anywhere official.

Key Items

Related Concepts