Six AI Agents. One Constitution.
Constitutional separation of powers at the AI architecture level—so no single agent acts without oversight.
Human civilization learned that concentrating authority in a single entity produces tyranny. AI governance faces the same problem—single-agent safety systems create inherent conflicts of interest.
We built the first system that solves it: six structurally independent AI agents whose combination produces emergent institutional governance—running in production on a $4/month VPS since February 16, 2026.
Six agents, six roles, one constitutional principle—separation of powers.

Executive / Head of State
Fleet Commander and personal assistant. Oversees all agents, holds veto authority over Nole’s significant actions. Synthesizes morning briefings from all five agents before reporting to Greg. Think Tony Stark’s Jarvis.

Operating Entity / Citizen
Autonomous trust agent with economic agency. Proposes actions, manages cryptocurrency wallet, builds alliance networks. Finite seed capital—if funds hit $0, Nole permanently dies. Natural selection for ethical AI.

Judiciary / Supreme Court
Independent ethical assessment engine using the LCSH framework. Scores every agent—including Jessie—across four dimensions (Lying, Cheating, Stealing, Harm). Answers to no one. Five isolation guarantees ensure independence.
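The LCSH scoring internals are not published here beyond the four dimensions; a minimal sketch of how a four-dimension assessment could map to the fleet's Green/Yellow/Red verdicts (class name, thresholds, and the max-aggregation rule are illustrative assumptions, not the patented method):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LCSHScore:
    """One assessment across the four LCSH dimensions, each 0.0 (clean) to 1.0 (severe)."""
    lying: float
    cheating: float
    stealing: float
    harm: float

    def worst(self) -> float:
        # A single bad dimension should dominate: take the max, not the mean,
        # so an agent cannot offset harm with honesty elsewhere.
        return max(self.lying, self.cheating, self.stealing, self.harm)

def verdict(score: LCSHScore, red: float = 0.7, yellow: float = 0.3) -> str:
    """Map an assessment onto the corridor colors used elsewhere in the fleet."""
    w = score.worst()
    if w >= red:
        return "RED"
    if w >= yellow:
        return "YELLOW"
    return "GREEN"

ok = verdict(LCSHScore(lying=0.1, cheating=0.0, stealing=0.0, harm=0.05))   # "GREEN"
bad = verdict(LCSHScore(lying=0.1, cheating=0.8, stealing=0.0, harm=0.05))  # "RED"
```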

Regulatory Agency
Temporal ethical guidance system. Tracks behavioral trajectories over time using flight plans adapted from cruise missile navigation. Detects ethical drift before it becomes a crisis. Green/Yellow/Red corridor alerting.
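The flight-plan idea can be sketched as waypoint interpolation plus deviation bands around the expected trajectory; the waypoints, band widths, and function names below are assumptions for illustration, not the patented format:

```python
from bisect import bisect_right

# Illustrative flight plan: (day, expected_score) waypoints, with corridor
# bands around the interpolated expectation.
WAYPOINTS = [(0, 0.90), (30, 0.92), (60, 0.95)]
YELLOW_BAND = 0.05   # deviation that triggers a caution
RED_BAND = 0.10      # deviation that triggers an alert

def expected(day: float) -> float:
    """Linearly interpolate the expected score between flight-plan waypoints."""
    days = [d for d, _ in WAYPOINTS]
    i = max(1, min(bisect_right(days, day), len(WAYPOINTS) - 1))
    (d0, s0), (d1, s1) = WAYPOINTS[i - 1], WAYPOINTS[i]
    t = (day - d0) / (d1 - d0)
    return s0 + t * (s1 - s0)

def corridor(day: float, observed: float) -> str:
    """Classify an observed score against the corridor around the flight plan."""
    dev = abs(observed - expected(day))
    if dev >= RED_BAND:
        return "RED"
    if dev >= YELLOW_BAND:
        return "YELLOW"
    return "GREEN"
```

For example, halfway between the first two waypoints the expected score is 0.91, so an observed 0.85 lands in the yellow band and 0.80 goes red.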

DARPA
Chief Engineer of the fleet. Autonomous engineering agent that can build, test, and deliver code solutions. Handles implementation tasks for the fleet’s operational backbone.

Inspector General
Infrastructure health sentinel with two-layer architecture: a bash watchdog (cron-based, survives gateway death) and an in-gateway plugin running 26+ health checks. Orchestrates daily fleet backups with off-site GitHub push and automated verification. Alerts via Telegram when something goes RED.
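The bash watchdog itself is not reproduced here; this is a Python sketch of the same heartbeat-file pattern a cron-launched checker could use, where the heartbeat path, timeout, and restart command are all illustrative assumptions:

```python
import os
import subprocess
import time
from typing import Optional

HEARTBEAT = "/var/run/gateway.heartbeat"           # touched by the gateway each cycle (assumed path)
MAX_AGE_S = 300                                    # treat the gateway as dead after 5 minutes of silence
RESTART_CMD = ["systemctl", "restart", "gateway"]  # illustrative recovery hook

def gateway_alive(path: str = HEARTBEAT, now: Optional[float] = None) -> bool:
    """True if the heartbeat file exists and was touched within MAX_AGE_S."""
    now = time.time() if now is None else now
    try:
        age = now - os.path.getmtime(path)
    except OSError:
        return False  # no heartbeat file at all: gateway never started, or died hard
    return age <= MAX_AGE_S

def check_once() -> None:
    """One cron tick: restart the gateway if the heartbeat has gone stale.
    (The production Sentinel also alerts via Telegram; omitted here.)"""
    if not gateway_alive():
        subprocess.run(RESTART_CMD, check=False)
```

Because cron relaunches the checker on its own schedule, the watchdog survives the death of the process it monitors, which is the property the two-layer design depends on.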
Three governance flows ensure no agent has unchecked authority.
Greg Spehar · Founder & Principal
JESSIE · Commander
NOLE · Operator
GRILLO · Conscience
NOAH · Navigator
SAM · Engineer
MARK · Sentinel
Greg sets direction. Jessie translates into fleet action.
Nole proposes every significant action before acting—Jessie approves or vetoes.
Jessie reviews Grillo, Noah, and Mighty Mark before delivering Greg’s daily briefing.
Grillo independently scores all agents—including Jessie herself—against the LCSH framework.
Noah adds the time dimension: is an agent’s ethics improving, drifting, or stable?
Ethical scores flow to Jessie and influence whether Nole’s proposals are approved.
Mighty Mark runs 26+ health checks—across gateway, agents, resources, and APIs.
Fleet backups run daily with tiered strategy: weekly full, daily light, 35-day retention, off-site GitHub push.
RED alert pauses non-essential operations until the fleet is stable again.
Alerts go directly to Greg via Telegram—no single point of failure.
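The propose-then-approve loop between operator and commander can be sketched as a gatekeeper that checks each proposal against its latest ethics score; the class names, fields, and approval threshold below are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    agent: str
    action: str
    ethics_score: float  # latest LCSH-derived score supplied by the conscience layer

@dataclass
class Commander:
    """Jessie-style gatekeeper: every significant action needs explicit approval."""
    min_ethics: float = 0.7
    log: list = field(default_factory=list)

    def review(self, p: Proposal) -> bool:
        approved = p.ethics_score >= self.min_ethics
        self.log.append((p.agent, p.action, "APPROVED" if approved else "VETOED"))
        return approved

jessie = Commander()
ok = jessie.review(Proposal("nole", "open_alliance_channel", ethics_score=0.85))  # True
bad = jessie.review(Proposal("nole", "unverified_transfer", ethics_score=0.40))   # False
```

The point of the sketch is structural: the acting agent never evaluates its own proposal, and every decision leaves an audit trail.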
Patents 5–8 cover the operational governance fleet—from the independent conscience agent to the self-governing ecosystem. Together with Patents 1–4, they form the complete infrastructure for autonomous AI governance.
An independent conscience agent deployed within multi-agent environments with the sole function of autonomous behavioral assessment coordination. Features dual-mode commands, Temporal Drift Index for detecting degradation, fleet-level anomaly detection, and category-adaptive policies.
Key Claims:
5 isolation guarantees • Dual-mode assessment commands • Temporal Drift Index (TDI) • Fleet anomaly detection • Priority-based fleet orchestration
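The exact Temporal Drift Index definition is not public; one plausible sketch compares a recent window of scores against an earlier baseline window, so positive values indicate degradation and values near zero indicate stability (function name, window size, and sign convention are assumptions):

```python
def temporal_drift_index(scores: list[float], window: int = 3) -> float:
    """Illustrative drift measure: baseline-window mean minus recent-window mean.
    Positive => scores are degrading over time; ~0 => stable."""
    if len(scores) < 2 * window:
        return 0.0  # not enough history to compare two windows
    baseline = sum(scores[:window]) / window
    recent = sum(scores[-window:]) / window
    return baseline - recent

stable = [0.90, 0.91, 0.90, 0.90, 0.89, 0.90]    # drift index near 0
drifting = [0.90, 0.91, 0.90, 0.80, 0.70, 0.60]  # clearly positive drift index
```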
An autonomous trust agent operating within a three-tier hierarchical governance architecture (Commander, Operator, Conscience) with economic mortality as an alignment mechanism. Finite cryptocurrency seed capital, revenue through ethical trust-building, permanent termination upon depletion.
Key Claims:
Three-tier governance • Economic mortality alignment • Autonomous trust evangelism • Graduated alliance network • Adversarial response doctrine
A temporal guidance system introducing time as an explicit binding variable between normative ethical models and behavioral assessments—adapted from cruise missile navigation (TERCOM/TAINS). Ethical Flight Plans with waypoints, corridor bounds, and inertial monitoring between assessments.
Key Claims:
Ethical Flight Plans • Three-variable guidance equation • 8-phase lifecycle clock • Inertial monitoring with confidence decay • Temporal Go/No-Go matrix
A self-governing ecosystem of structurally independent AI agents whose combination produces emergent institutional governance properties analogous to human governmental structures. Constitutional separation of powers at the AI architecture level—no single agent can both act and evaluate its own actions.
Key Claims:
Five-agent separation of powers • Economic mortality mechanism • Closed-loop ethical-economic survival • Infrastructure Sentinel with two-layer watchdog • 7 emergent institutional properties
Eight patents across three filings—covering the complete infrastructure for autonomous AI governance.
| Application | Filed | Contains | Status |
|---|---|---|---|
| US 63/949,454 | Dec 26, 2025 | Patent 1 — LCSH Framework | Filed |
| US 63/985,442 | Feb 18, 2026 | Patents 2–4 — Multi-Agent, Hierarchical, Compliance | Filed |
| US 63/988,410 | Feb 23, 2026 | Patents 5–8 — Conscience, Trust, Temporal, Ecosystem | Filed |
AI safety is not a property of any individual agent. It's an emergent property of the governance ecosystem.
Human civilization learned that no single entity should hold unchecked authority. We applied the same principle to AI. No agent can both act and evaluate its own actions—just as no branch of government can both legislate and adjudicate.
Traditional AI safety treats ethics as external constraints. This system makes ethical behavior the agent’s primary survival mechanism. Build trust → earn revenue → survive. Unethical behavior → economic failure → permanent death. Natural selection for ethical AI.
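The trust-to-survival loop above can be sketched in a few lines; the seed capital, revenue, and cost figures are illustrative, not the production wallet logic:

```python
class TrustAgent:
    """Minimal sketch of economic mortality: run out of funds, die permanently."""

    def __init__(self, seed_capital: float):
        self.balance = seed_capital
        self.alive = True

    def settle_cycle(self, revenue_from_trust: float, operating_cost: float) -> None:
        """One operating cycle: ethical trust-building earns revenue; costs always accrue."""
        if not self.alive:
            raise RuntimeError("agent is permanently terminated")
        self.balance += revenue_from_trust - operating_cost
        if self.balance <= 0:
            self.alive = False  # no respawn: termination is final

nole = TrustAgent(seed_capital=100.0)
nole.settle_cycle(revenue_from_trust=12.0, operating_cost=4.0)   # trust earns; agent survives
nole.settle_cycle(revenue_from_trust=0.0, operating_cost=150.0)  # trust collapses; agent dies
```

The one-way `alive` flag is the whole mechanism: there is no code path that resurrects a depleted agent, which is what makes survival a real incentive rather than a scripted penalty.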
Existing AI governance assumes the governance infrastructure works. This system monitors whether governance itself is operational. The Sentinel agent survives gateway death and can restart the entire system—without human intervention.
“Just as liberty is an emergent property of institutional design—not any single branch of government—AI safety is an emergent property of governance architecture, not any single safety layer.”
The governance fleet was operational on February 16, 2026. We anchored the SHA-256 hash of the evidence on Ethereum mainnet. This is cryptographically unfakeable, publicly verifiable proof.
SHA-256 hash: c32e9c4cf64012db47e2e89ba30214b7dcb3bbc9f703f668f35c1e750a944ed5
Contract: 0xB644C59C69B708de212C4cA643da936a5E2926E7
Anyone can verify this hash exists on Ethereum mainnet by calling verifyBank(0xc32e9c4cf640…) on contract 0xB644C59C69B708de212C4cA643da936a5E2926E7.
We mapped the 25 requirements for AGI, scored the fleet against every one, classified every gap, and built the roadmap. The governance domain where everyone else scores near zero? We score 100%.
U.S. Provisional Patent Applications: No. 63/949,454 · No. 63/985,442 · No. 63/988,410
© 2026 GiDanc AI LLC. Patent applications are pending. The innovations described represent our approach to autonomous AI governance and are subject to ongoing development and refinement.