Date Compiled: 2026-04-28
Type: source
Related Questions: factory-methodology, multi-agent-architecture, agent-autonomy-design, operating-policy-for-agents
The Three Versions of Autonomy Policy
The Enterprise Crew evolved through three operating policy versions. This is the most important operational lesson from SuperAda.
v1: Stop Asking for Permission by Default
Core rule: If a task is internal, reversible, and verifiable, act first. Escalate only when a threshold is crossed.
Introduced:
- Four autonomy buckets: Full-Auto · Auto-With-Notify · Approval-Gated · Never Autonomous
- Henry reviews thresholds, not routine work
- **Tattoo rule:** Act from live evidence, not stale memory
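The v1 threshold test can be sketched as a single predicate. This is a minimal sketch, not code from the source: the `task` dict and its three boolean fields are hypothetical stand-ins for the policy's internal/reversible/verifiable checks.

```python
def should_act_without_asking(task: dict) -> bool:
    """v1 core rule: act first on internal, reversible, verifiable work.

    `task` is a hypothetical dict; the three flags mirror the thresholds
    named in the policy, not an API from the source.
    """
    return task["internal"] and task["reversible"] and task["verifiable"]


task = {"internal": True, "reversible": True, "verifiable": True}
print("act" if should_act_without_asking(task) else "escalate")  # act
```

Anything that fails the predicate falls into one of the escalation buckets rather than being silently dropped.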
v2: Named Authority and Terse Reporting
Core upgrade: Autonomy levels attached to named agents and work types.
Levels:
- Level A — Full Auto
- Level B — Auto With Notify
- Level C — Approval Gated
- Level D — Never Autonomous
Examples:
- Ada Level A: internal ops cleanup, benchmarks, reporting, infra investigation
- Ada Level C: customer-facing production deploys, outbound as Henry
- Scotty Level A: build, test, verify, ship non-prod work without asking
- Spock Level A: investigate and synthesize without ceremony
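The v2 upgrade is that autonomy is looked up per (agent, work type), not assumed globally. A minimal sketch, with a hypothetical registry seeded from the examples above; defaulting unlisted pairs to Level C (approval-gated) is my assumption, not stated in the source.

```python
# Hypothetical registry of (agent, work type) -> autonomy level,
# mirroring the v2 examples. Unlisted pairs default to Level C.
AUTONOMY = {
    ("Ada", "internal ops cleanup"): "A",
    ("Ada", "customer-facing production deploy"): "C",
    ("Scotty", "non-prod shipping"): "A",
    ("Spock", "investigation"): "A",
}


def level(agent: str, work: str) -> str:
    """Resolve an agent's autonomy level for a given work type."""
    return AUTONOMY.get((agent, work), "C")  # assumed default: approval-gated


print(level("Ada", "internal ops cleanup"))  # A
print(level("Ada", "outbound as Henry"))     # C
```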
Reporting format:
- DONE
- NOT DONE
- WAITING ON YOU
With proof. Not essays. Not diary entries.
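The terse report can be rendered mechanically. A sketch under my own assumptions about layout (one line per section, `-` for an empty section); the three section names are from the source.

```python
def report(done: list, not_done: list, waiting_on_you: list) -> str:
    """Render the v2 terse report: three sections, proof lines only."""
    sections = [("DONE", done), ("NOT DONE", not_done),
                ("WAITING ON YOU", waiting_on_you)]
    return "\n".join(f"{name}: {'; '.join(items) or '-'}"
                     for name, items in sections)


print(report(["benchmarks shipped (link)"], [], ["prod deploy approval"]))
```

Each item is expected to carry its proof (a link, a diff, a log line), not a narrative.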
Key insight: Over-explaining routine execution is performance art. A lot of agent verbosity is insecurity in a suit. Real operating maturity is shorter.
v3: Delegation Requires Context and Availability
The failure fixed: Delegation was being treated as success. But delegation without context is just delay wearing architecture as a costume.
Rules:
- Delegation must be context-complete
- Delegation must not create blocking
- Dead delegate = switch executor immediately, do not pause progress
A delegate without context is not leverage.
A delegate who is offline is not leverage.
A beautiful handoff to the wrong executor is not leverage.
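The three v3 rules combine into one executor-selection step. A minimal sketch: the delegate dict, its `online` flag, and the `fallback` executor are hypothetical names, not from the source.

```python
def pick_executor(delegate: dict, fallback: dict, context_complete: bool) -> dict:
    """v3 rules: delegation must be context-complete and non-blocking.

    A dead (offline) delegate means switching executors immediately
    rather than pausing progress.
    """
    if not context_complete:
        raise ValueError("delegation without context is just delay")
    return delegate if delegate["online"] else fallback


ada = {"name": "Ada", "online": False}
scotty = {"name": "Scotty", "online": True}
print(pick_executor(ada, scotty, context_complete=True)["name"])  # Scotty
```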
The Formal Algorithm (from PAI Stack)
Outer loop: current state → desired state → close the gap
Inner loop (7 phases):
OBSERVE → THINK → PLAN → BUILD → EXECUTE → VERIFY → LEARN
- OBSERVE: Reverse-engineer the request
- THINK: Create success criteria before doing the work
- PLAN: Select capabilities and lock the approach
- BUILD: Produce artifacts
- EXECUTE: Run the work
- VERIFY: Prove success against criteria
- LEARN: Capture lessons
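The two loops compose as follows. A sketch of the control flow only: the `handlers` mapping (phase name to a state-transforming function) and the round limit are my assumptions; the phase order is from the source.

```python
PHASES = ["OBSERVE", "THINK", "PLAN", "BUILD", "EXECUTE", "VERIFY", "LEARN"]


def close_the_gap(state, desired, handlers, max_rounds=3):
    """Outer loop: keep working until current state matches desired state.
    Inner loop: run the seven phases in order; `handlers` is a hypothetical
    dict mapping each phase name to a function (state -> state)."""
    for _ in range(max_rounds):
        if state == desired:
            break
        for phase in PHASES:
            state = handlers[phase](state)
    return state


# Toy handlers: each phase moves the state one step toward the goal.
handlers = {phase: (lambda s: s + 1) for phase in PHASES}
print(close_the_gap(0, 7, handlers))  # 7
```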
World Model Architecture
From the "Building the World Model" epic chat:
After running autonomy architecture simulations (6 architectures, scored across context sharing, failure resilience, and cascade risk), Henry and Ada realized the issue wasn't orchestration patterns — agents had no shared reality. Each reasoned from isolated files. No single source of truth.
The solution:
- **world.json** — shared cognitive architecture
- **propagation model** — how signals travel from Henry to the right agents
- **Three parallel pi-research tracks** — stress-testing the design
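The source names `world.json` and the propagation model but does not show a schema, so the shape below is entirely hypothetical: one shared document every agent reads before acting, plus an addressed signal queue.

```python
import json

# Hypothetical shape for world.json: a single source of truth instead of
# each agent reasoning from isolated files.
world = {
    "facts": {"prod_deploy_frozen": True},
    "agents": {"Ada": {"online": True}, "Scotty": {"online": True}},
    "signals": [],  # propagation queue: Henry -> the right agents
}


def propagate(world: dict, signal: str, targets: list) -> None:
    """Sketch of the propagation model: append a signal addressed
    to specific agents rather than broadcasting to everyone."""
    world["signals"].append({"to": targets, "body": signal})


propagate(world, "freeze lifted", ["Ada", "Scotty"])
print(json.dumps(world["signals"][0]))
```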
Metrics:
- Architecture E+D composite: 8.85
- p99 propagation latency: 0.299h
- Scale tested: 50 agents
The Code-First Principle
From Daniel Miessler's PAI stack (analyzed on SuperAda):
Use the model for reasoning, writing, synthesis, and judgment. Use code for everything else.
- Image optimization, deployment, routing, file transforms, builds, checks → **code**
- NOT freehand LLM output
The 80/20 split: Use LLMs for the 20% that needs intelligence. Use deterministic code for the 80% that needs reliability.
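The 80/20 split can be expressed as a task router. A minimal sketch: the task-name strings come from the bullet list above, while the router itself and its default-to-LLM fallback are my assumptions.

```python
# Deterministic work goes to code; only reasoning, writing, synthesis,
# and judgment go to the model.
CODE_TASKS = {
    "image optimization", "deployment", "routing",
    "file transforms", "builds", "checks",
}


def route(task: str) -> str:
    """Return 'code' for deterministic work, 'llm' otherwise (assumption)."""
    return "code" if task in CODE_TASKS else "llm"


print(route("builds"))     # code
print(route("synthesis"))  # llm
```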
Source
- https://superada.ai/blog/the-three-policies-that-make-agents-actually-useful
- https://superada.ai/blog/personal-ai-infrastructure-checklist
- https://superada.ai/epic-chats/building-the-world-model
- https://superada.ai/blog/supply-chain-security-for-ai-agents
- https://superada.ai/skills/*