SuperAda: Multi-Agent Architecture & Operating Methodology
superada-multi-agent-architecture.md

Date Compiled: 2026-04-28

Type: source

Related Questions: factory-methodology, multi-agent-architecture, agent-autonomy-design, operating-policy-for-agents

The Three Versions of Autonomy Policy

The Enterprise Crew's operating policy evolved through three versions. This is the most important operational lesson from SuperAda.

v1: Stop Asking for Permission by Default

Core rule: If a task is internal, reversible, and verifiable, act first. Escalate only when a threshold is crossed.
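The v1 rule reduces to a three-condition gate. A minimal sketch, assuming illustrative names (`Task`, `act_first`) that are not from the source:

```python
from dataclasses import dataclass

@dataclass
class Task:
    internal: bool    # touches only our own systems
    reversible: bool  # can be rolled back cleanly
    verifiable: bool  # outcome can be checked after the fact

def act_first(task: Task) -> str:
    """v1 rule: internal + reversible + verifiable => act without asking."""
    if task.internal and task.reversible and task.verifiable:
        return "act"
    return "escalate"
```

The point of the gate is that escalation is the exception path, not the default.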

Introduced:

v2: Named Authority and Terse Reporting

Core upgrade: Autonomy levels attached to named agents and work types.

Levels:

Examples:

Reporting format:

DONE / NOT DONE / WAITING ON YOU

With proof. Not essays. Not diary entries.

Key insight: Over-explaining routine execution is performance art. A lot of agent verbosity is insecurity in a suit. Real operating maturity is shorter.
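"Named authority" can be sketched as a lookup table keyed by agent and work type. The levels, agent names, and grants below are invented for illustration; the source does not enumerate them:

```python
# Hypothetical autonomy grants: (agent, work type) -> level.
# "full" = act and report DONE; "propose" = draft, wait for approval.
AUTONOMY = {
    ("ada", "refactor"): "full",
    ("ada", "deploy"): "propose",
    ("henry", "research"): "full",
}

def autonomy_for(agent: str, work_type: str) -> str:
    # Default to the most conservative level when no grant is named.
    return AUTONOMY.get((agent, work_type), "ask")
```

The design choice that matters is the default: an unnamed (agent, work type) pair gets the lowest autonomy, so authority exists only where it was explicitly granted.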

v3: Delegation Requires Context and Availability

The failure fixed: Delegation was being treated as success. But delegation without context is just delay wearing architecture as a costume.

Rules:

A delegate without context is not leverage.

A delegate who is offline is not leverage.

A beautiful handoff to the wrong executor is not leverage.
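The three "not leverage" conditions above compose into a single pre-delegation check. A sketch, with parameter names chosen for illustration:

```python
def can_delegate(delegate_has_context: bool,
                 delegate_online: bool,
                 delegate_matches_work: bool) -> bool:
    """v3 rule: delegation counts as leverage only when the delegate
    has the context, is available, and is the right executor.
    Failing any one condition means: do the work yourself or wait."""
    return delegate_has_context and delegate_online and delegate_matches_work
```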

The Formal Algorithm (from PAI Stack)

Outer loop: current state → desired state → close the gap

Inner loop (7 phases):

OBSERVE → THINK → PLAN → BUILD → EXECUTE → VERIFY → LEARN
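The two loops nest: the outer loop repeats until current state matches desired state, and each outer pass runs the seven phases in order. A minimal sketch, assuming a hypothetical `handlers` mapping (phase name to a state-transforming function) that is not part of the PAI stack itself:

```python
PHASES = ["OBSERVE", "THINK", "PLAN", "BUILD", "EXECUTE", "VERIFY", "LEARN"]

def run_loops(state, desired, handlers, max_iters=10):
    """Outer loop: close the gap between current and desired state.
    Inner loop: run the seven phases in order on each pass.
    `handlers` maps phase name -> function(state) -> state."""
    for _ in range(max_iters):
        if state == desired:
            return state
        for phase in PHASES:
            state = handlers[phase](state)
    return state
```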

World Model Architecture

From the "Building the World Model" epic chat:

After running autonomy architecture simulations (6 architectures, scored across context sharing, failure resilience, and cascade risk), Henry and Ada realized the issue wasn't orchestration patterns — agents had no shared reality. Each reasoned from isolated files. No single source of truth.

The solution:

Metrics:

The Code-First Principle

From Daniel Miessler's PAI stack (analyzed on SuperAda):

Use the model for reasoning, writing, synthesis, and judgment. Use code for everything else.

The 80/20 split: Use LLMs for the 20% that needs intelligence. Use deterministic code for the 80% that needs reliability.
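The split can be made concrete as a router: deterministic work goes to plain code, and only judgment calls reach the model. `call_llm` below is a stand-in parameter, not a real API:

```python
def word_count(text: str) -> int:
    # The 80%: pure code, fully deterministic, no model involved.
    return len(text.split())

def summarize(text: str, call_llm) -> str:
    # The 20%: synthesis and judgment, delegated to the model.
    return call_llm(f"Summarize: {text}")

def handle(task_kind: str, text: str, call_llm):
    if task_kind == "count":
        return word_count(text)
    return summarize(text, call_llm)
```

The payoff is reliability: the deterministic path never varies between runs, so model cost and model nondeterminism are confined to the tasks that actually need intelligence.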
