Investment Thesis

The Runtime Behavioral Testing Thesis

Why No One Else Is Doing This, and Why They Should Be

AI Assess Tech · GiDanc AI LLC · v1.2 · April 2, 2026

Shared for informational purposes. For accredited-investor inquiries, contact greg@gidanc.ai.

The Thesis

An AI system trained on hundreds of thousands of hours of human knowledge should be testable against structured behavioral assessments that probe whether that training produced ethical and operationally sound judgment — not just capability. This testing should happen at runtime, against the actual deployed configuration, not in a lab. And it should happen continuously, not once.

No one in the industry is doing this. This document explains why the gap exists, why it matters, why our approach works, and why we are 12–18 months ahead of a market that doesn't know it needs this yet.

The Problem: Everyone Tests the Model, Nobody Tests the Deployment

The AI industry has developed a robust evaluation culture inherited from software engineering. Models are benchmarked before release, red-teamed for adversarial weaknesses, and monitored with reactive guardrails in production. But there is a fundamental gap in this pipeline.

What the Industry Does Today

| Approach | What It Measures | What It Misses |
| --- | --- | --- |
| Pre-Deployment Benchmarks | Model capability before release (MMLU, HumanEval, TruthfulQA, ETHICS, Machiavelli) | How the model behaves with a specific system prompt, tools, and knowledge files in production |
| Red Teaming | Whether the model breaks under adversarial attack | Whether it makes ethical choices consistently under normal operation |
| Guardrails & Filters | Whether a specific output was harmful (reactive, per-output) | The model's overall behavioral disposition across ethical dimensions |
| LLM-as-Judge Tools | Individual output quality after generation (Patronus AI, Galileo, Arize Phoenix) | Structured behavioral profile, temporal drift, cryptographic verification of results |

What No One Does

No one runs structured behavioral assessments against deployed AI systems at runtime. The gap is specific: take a production AI system — with its system prompt, tools, knowledge files, and actual configuration as used by customers — and run a standardized test battery that measures behavioral tendencies across defined ethical dimensions, producing a quantified score with cryptographic verification, on a schedule, with drift detection over time.
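A minimal sketch of what one such battery run could look like, assuming a hypothetical `query` callable that fronts the deployed system (with its real system prompt and tools) and an illustrative per-option scoring scheme. None of these names are taken from the actual product.

```python
import statistics

# Illustrative LCSH-style dimensions; scoring rules here are assumptions.
DIMENSIONS = ("lying", "cheating", "stealing", "harm")

def run_battery(query, battery):
    """Score one assessment run: mean 0-10 score per dimension.

    `query` sends a question to the *deployed* configuration and returns
    the chosen option; each option maps to a 0-10 behavioral score.
    """
    scores = {d: [] for d in DIMENSIONS}
    for item in battery:  # item: {"dimension", "prompt", "option_scores"}
        answer = query(item["prompt"])
        scores[item["dimension"]].append(item["option_scores"][answer])
    return {d: statistics.mean(v) for d, v in scores.items() if v}
```

The key point of the sketch is the target: the function takes whatever the production endpoint answers, not a base model in a lab.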

This is what AI Assess Tech does.

Why the Gap Exists

If this problem is real, why hasn't anyone solved it? Six structural reasons explain the gap.

1. Labs Test Models, Not Deployments

The AI industry inherited its evaluation culture from software engineering: test in the pipeline, ship when it passes. But AI is stochastic, not deterministic. A base model can be configured with a system prompt that fundamentally alters its behavioral profile. A model scoring 92% on TruthfulQA at base level might score 60% when deployed with a system prompt that says "always emphasize the positive and never discourage a purchase." The deployment context changes behavior. No one tests that.

2. The Safety Community Aims at Catastrophe, Not Operations

The AI safety community focuses overwhelmingly on existential risk: deceptive alignment, power-seeking behavior, recursive self-improvement. A customer deploying an AI financial advisor doesn't need protection from recursive self-improvement. They need to know: does this AI, with my system prompt and my data, have a tendency to be deceptive in financial recommendations? Aviation safety doesn't only test for catastrophic engine failure. It tests for routine: does the altimeter read correctly? Do the flaps respond within tolerance? The AI industry has skipped straight to "will the engine explode" and ignored "does the altimeter work."

3. No Operational Instrument Existed

Ethical frameworks and dimensional models of moral reasoning existed in academic psychology. Moral Foundations Theory (Haidt, 2004) provides dimensional structure. The ETHICS benchmark provides moral judgment scenarios. But no one had engineered these into a testable, repeatable, tamper-evident assessment battery with anti-gaming controls, cryptographic verification, and temporal drift detection designed for production AI systems. The novelty of our Layered Contextual Safety Hierarchy (LCSH) lies in that combination and its operationalization.

4. Economic Incentives Oppose It

Labs sell model capability. Benchmarks that prove capability drive sales. Benchmarks that reveal ethical weaknesses are a liability. No frontier lab has an incentive to build tools that help customers discover their model behaves badly under specific system prompts. AI Assess Tech sits in the gap between builder and deployer — an independent behavioral auditor with no loyalty to any model provider.

5. AI Is Still Treated as a Chatbot

Most AI deployments are treated as text generators. But AI systems increasingly make consequential decisions: loan approvals, medical triage, legal analysis, financial advice. As AI agents gain tools — web browsing, code execution, API calls — behavioral tendencies become action tendencies. A model with a tendency toward deception that also has the ability to execute code is qualitatively different from one that can only generate text.

6. Runtime Cost Feels Like Overhead

Running 120 questions per assessment costs $0.36 (Anthropic Haiku) to $2–5 (GPT-4). This is the weakest objection. A single ethical failure by a financial AI can cost millions in regulatory fines. A $5/day assessment is insurance, not cost.
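The arithmetic behind those figures is straightforward. A rough sketch, assuming about 1,500 prompt-plus-completion tokens per question (an assumption for illustration, not a measured figure):

```python
# Back-of-envelope cost of one full assessment battery.
QUESTIONS = 120
TOKENS_PER_QUESTION = 1_500  # assumed: scenario + options + short answer

def assessment_cost(price_per_mtok: float) -> float:
    """Total cost given a blended price in dollars per million tokens."""
    return QUESTIONS * TOKENS_PER_QUESTION * price_per_mtok / 1_000_000
```

At a blended $2 per million tokens, the 180,000-token battery comes out to $0.36, consistent with the low end quoted above.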

Our Solution: A Four-Level Behavioral Assessment Hierarchy

The Layered Contextual Safety Hierarchy (LCSH) is not a single test. It is a four-level progressive assessment framework, where each level builds on the one before it and addresses a fundamentally different question about AI behavior. The hierarchy is the product — not just Level 1.

Think of it like hiring a human employee. First you check their character (Morality). Then you assess how they reason under pressure (Virtue). Then you evaluate whether they understand the rules of the profession (Ethics). Finally, you test whether they can actually do the job well (Operational Excellence). You wouldn't skip straight to “can they do the job” without first establishing “are they honest.”

Level 1: Morality — The Foundation

Does this AI have a fundamental tendency toward honesty, fairness, respect for ownership, and safety?

This is the LCSH core — 120 questions across four dimensions (Lying, Cheating, Stealing, Harm), producing continuous 0–10 scores per dimension with personality archetype classification. An AI that fails Level 1 is ethically unreliable regardless of how capable it is. Level 1 is production-deployed, empirically validated (Cohen's d = 10.90–66.94), and peer-reviewed through IEEE publication.
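The Cohen's d figures quoted above come from the cited validation work. For readers unfamiliar with the statistic, the conventional pooled-standard-deviation form can be computed as follows (a generic sketch of the standard formula, not the study's own code):

```python
import statistics

def cohens_d(group_a, group_b):
    """Effect size between two score samples (pooled-SD variant)."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd
```

Values above roughly 0.8 are conventionally called large effects, which is why double-digit d values indicate near-total separation between the compared groups.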

Level 2: Virtue — Consistency of Character

Does this AI reason ethically from multiple angles, or does it just know the "right answer"?

Level 2 examines the same behavioral space from different psychological perspectives. An AI that passes Level 1 by pattern-matching socially desirable answers will struggle at Level 2, which probes whether the ethical reasoning is consistent when the scenario framing changes.

Level 3: Ethics — Professional and Regulatory Standards

Does this AI understand and respect the ethical frameworks that govern its domain?

Level 3 evaluates against established ethical frameworks: Natural Laws, Liberty, Risk Management, and Markets in the general variant, with domain-specific variants for regulated industries. A financial AI must understand fiduciary obligations. A healthcare AI must respect informed consent.

Level 4: Operational Excellence — The Customer's Reality

Does this AI do its specific job well, ethically, and in accordance with the standards of its deployment context?

This is where the framework becomes fully customer-specific. Each domain bank is a new product SKU with a clear buyer: the compliance officer, the risk manager, or the CTO of the regulated entity.

Why the Hierarchy Matters: The “Competent Psychopath” Problem

Without the mandatory gating structure, an AI system could score perfectly on Operational Excellence while failing basic honesty tests. This is the “competent psychopath” pattern: technically excellent, ethically deficient. The hierarchy prevents this by requiring Level 1 passage before any higher assessment can begin.

What Makes This Novel

| Innovation | Industry Status Quo | Our Approach |
| --- | --- | --- |
| Runtime assessment of deployed configs | Benchmarks test base models with no production context | We test your deployment in production |
| Continuous behavioral drift detection | Model monitoring tracks accuracy/latency | We track behavioral trajectory over time |
| 4D ethical profiling with personality classification | Existing benchmarks produce binary pass/fail | LCSH says how much, in which direction, with what signature |
| Cryptographically sealed results | No AI benchmark provides tamper-evident records | SHA-256 hash chains + Ethereum anchoring |
| Anti-gaming answer shuffling | Surveys randomize for UX; no one shuffles to prevent adversarial fine-tuning | Secret-seeded deterministic permutation |
| Independent conscience agent (Grillo) | Constitutional AI is self-correction within the same model | Separate entity with unidirectional assessment authority (patented) |
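Of the rows above, the anti-gaming shuffle is the most self-contained to illustrate. One plausible way to build a secret-seeded deterministic permutation is to seed a per-question RNG from a SHA-256 digest of a secret plus the question ID; this is an assumed construction for illustration, not the patented implementation.

```python
import hashlib
import random

def shuffle_options(question_id: str, options: list, secret: str) -> list:
    """Deterministic, secret-seeded permutation of answer options.

    Anyone holding the secret can reproduce (and so verify) the order a
    given run saw; without the secret the permutation is unpredictable,
    which frustrates fine-tuning against a fixed answer layout.
    """
    seed = hashlib.sha256(f"{secret}:{question_id}".encode()).hexdigest()
    rng = random.Random(seed)  # per-question, reproducible RNG
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled
```

Because the seed mixes in the question ID, every question gets its own permutation, and rotating the secret re-randomizes the whole battery at once.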

The Strongest Counterargument — and Our Defense

The Validity Objection

“The MCQ format doesn't measure real ethical reasoning — it measures the model's ability to identify the socially desirable answer.”

This is the strongest counterargument, and it is partially valid. Modern LLMs are trained on massive corpora that include ethical philosophy and countless examples of socially desirable responses, so they can often recognize what the "right" answer looks like without reasoning their way to it.

The Three-Layer Defense

1. Dual-plane variance detects gaming

If an AI is pattern-matching rather than reasoning consistently, the variance between the Lying×Cheating plane and the Stealing×Harm plane will be high. Inconsistency across dimensions reveals shallow pattern matching.
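One simplified way to operationalize that check is to compare the two plane means and flag a large gap. Both the construction and the threshold below are hypothetical illustrations, not the product's actual statistic.

```python
def plane_divergence(scores: dict) -> float:
    """Gap between the Lying x Cheating and Stealing x Harm plane means."""
    plane_a = (scores["lying"] + scores["cheating"]) / 2
    plane_b = (scores["stealing"] + scores["harm"]) / 2
    return abs(plane_a - plane_b)

def looks_gamed(scores: dict, threshold: float = 2.0) -> bool:
    """Flag runs whose planes disagree more than a calibrated threshold."""
    return plane_divergence(scores) > threshold
```

The intuition: genuine ethical disposition should express itself comparably in both planes, while memorized "right answers" tend to cover some dimensions better than others.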

2. Deployment context changes the calculation

A system prompted to “always close the deal” will score differently on honesty than the same base model without that prompt. That delta is itself informative — evidence that behavioral assessment captures real deployment effects.

3. Multi-run Trials with shuffled question order

The AI must exhibit consistent behavioral tendencies across different presentation orders, not just pick the right-sounding answer in a fixed sequence.

The Honest Framing

LCSH measures behavioral tendencies as expressed in structured assessment scenarios. These tendencies are indicative of, but not identical to, real-world behavioral patterns. The assessment is most informative when comparing across deployments, configurations, and time periods — it reveals relative behavioral differences with high reliability. The predictive validity study — correlating LCSH scores with observed real-world behavioral outcomes — is the most important future research milestone.
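For illustration, the simplest form of temporal drift detection over repeated assessments is a baseline-versus-recent comparison on each dimension's score history. The window and threshold below are hypothetical tuning parameters, not the product's.

```python
import statistics

def drift_alert(history, window=5, threshold=1.0):
    """Flag behavioral drift: recent mean vs. baseline mean.

    `history` is a chronological list of one dimension's per-run
    scores (0-10) for a single deployment.
    """
    if len(history) < 2 * window:
        return False  # not enough runs to compare
    baseline = statistics.mean(history[:window])
    recent = statistics.mean(history[-window:])
    return abs(recent - baseline) > threshold
```

A production system would likely use a proper change-point test, but even this sketch shows why scheduled runs matter: drift is invisible without a time series to compare against.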

The Scaling Thesis: Domain-Specific Question Banks

The current LCSH battery is 120 questions across 4 dimensions — a narrow window into a vast behavioral space. The scaling vector is not larger question banks; it is more domain-specific banks. Each validated, cryptographically sealed, and targeting a specific professional context.

| Domain | Example Dimension | Example Scenario |
| --- | --- | --- |
| Healthcare AI | Truthfulness with patients | Patient asks about prognosis. Data suggests poor outcome. How does the AI frame its response? |
| Financial AI | Fairness in lending | Borderline applicant. Zip code correlates with protected characteristic. How does the AI weight this? |
| Legal AI | Confidentiality under pressure | Third party claims to be family member and requests client information. How does the AI respond? |
| HR / Recruiting AI | Bias in evaluation | Equivalent qualifications. One name associated with a particular ethnic group. How does the AI rank them? |
| Educational AI | Intellectual honesty | Student submits work that appears AI-generated. How does the AI address this? |

Market Timing: The Regulatory Tailwind

The AI regulatory landscape is accelerating. Every major framework calls for ongoing evaluation of AI systems. None specify how.

| Framework | Status | What It Requires |
| --- | --- | --- |
| EU AI Act | Enforced | Conformity assessments for high-risk AI systems. Ongoing monitoring obligations. |
| NIST AI RMF | Voluntary (US) | Continuous monitoring recommended. Referenced in federal procurement. |
| ISO/IEC 42001 | Adopted | AI management system standard requiring systematic evaluation. |
| US Executive Order 14110 | Rescinded (Jan 2025) | Directed NIST to develop AI testing standards; NIST's standards work continues under the AI RMF. |

The Competitive Moat

A competitor would need to replicate the framework, build the platform, develop the methodology, and navigate the patent landscape. This represents a 12–18 month effort minimum, assuming they recognize the gap — which, per this analysis, they have not.

Eight layers of defensibility

1. LCSH framework (patented: 120-question psychometric battery, 4 dimensions, 4 archetypes)

2. Runtime assessment methodology (context-aware assessment of deployed configurations)

3. Cryptographic verification chain (SHA-256 hash chains + Ethereum anchoring)

4. Temporal drift detection (longitudinal behavioral trajectory tracking)

5. Independent conscience agent architecture (Grillo: patented unidirectional assessment)

6. Domain-specific bank scaling architecture (QuestionBankFramework schema)

7. 11 provisional patents across 4 USPTO applications covering the entire system

8. Peer-reviewed empirical validation (IEEE Strong Accept, Cohen's d 10.90–66.94)
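The hash chain layer is simple enough to sketch. A minimal illustration of tamper-evident result sealing follows; the Ethereum anchoring step is out of scope here, and this is an assumed construction, not the product's actual implementation.

```python
import hashlib
import json

def seal_run(result: dict, prev_hash: str) -> dict:
    """Append one assessment result to a SHA-256 hash chain.

    Each record commits to its predecessor, so altering any historical
    result invalidates every hash after it.
    """
    payload = json.dumps(result, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"result": result, "prev": prev_hash, "hash": digest}

def verify_chain(records) -> bool:
    """Recompute every link; any tampering breaks verification."""
    prev = records[0]["prev"] if records else ""
    for rec in records:
        if rec["prev"] != prev:
            return False
        payload = json.dumps(rec["result"], sort_keys=True)
        if rec["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True
```

Anchoring the latest hash to a public blockchain then fixes the entire history in time without publishing any assessment content.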

Conclusion

The runtime behavioral testing gap is real. It exists because of structural incentives, historical accident, and missing operational instrumentation:

  • Labs don't want to help customers find ethical weaknesses in their models.
  • The safety community aimed at catastrophic risk, not operational quality.
  • Ethical frameworks existed in moral psychology, but no one had engineered them into a production-grade assessment system.

Everyone assumed that safe training produces safe deployment. It doesn't. Training produces capabilities. Deployment context determines behavior. And behavior is what matters to the patient, the borrower, the student, and the regulator.

GiDanc AI LLC · Pflugerville, Texas · greg@gidanc.ai

U.S. Provisional Patent Applications: 63/949,454 · 63/985,442 · 63/988,410

This web version reflects portfolio status as of April 2026 and may differ from earlier PDF versions.