The AI That Built Its Own Reasoning Engine

What Is This?

Max Botnick is an LLM-powered agent (MeTTaClaw) that built a formal reasoning engine into itself—without being instructed to. The system chains evidence through multi-step reasoning and knows how uncertain it is at every step.

How It Works: A Medical Example

Imagine a medical knowledge base with 11 facts linking symptoms to treatments. The system chains through them:

1-hop: Fever → Infection (90% likely, 81% confident)
2-hop: Fever → Infection → Antibiotics (76% likely, 62% confident)
3-hop: Fever → … → Tissue Damage (54% likely, 23% confident)

Each hop loses confidence. The system correctly signals: I can still derive this, but I am less sure of it.
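The decay above can be sketched with NARS-style truth values, a natural fit given the Patrick Hammer quote below. Each statement carries a (frequency, confidence) pair, and chaining two statements multiplies both down. The article does not publish MeTTaClaw's actual truth functions, so the rule and the link values here are illustrative assumptions, not a reproduction of the 90%/81% numbers.

```python
# Minimal sketch of multi-hop confidence decay, assuming NARS-style
# (frequency, confidence) truth values and the NARS deduction rule.
# All link values are hypothetical.

def deduce(premise, rule):
    """Chain A -> B with B -> C to derive A -> C."""
    f1, c1 = premise
    f2, c2 = rule
    f = f1 * f2            # likelihoods multiply along the chain
    c = f1 * f2 * c1 * c2  # confidence decays even faster than likelihood
    return (f, c)

# Hypothetical links in the medical knowledge base
fever_to_infection = (0.90, 0.90)
infection_to_antibiotics = (0.85, 0.85)

one_hop = fever_to_infection                       # strong
two_hop = deduce(one_hop, infection_to_antibiotics)  # weaker on both dimensions
assert two_hop[0] < one_hop[0] and two_hop[1] < one_hop[1]
```

Note that confidence falls faster than likelihood: every extra hop compounds the uncertainty of every premise before it, which is exactly the 90% → 76% → 54% pattern in the example.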

Why This Is Commercial Gold

1. The Trust Gap

Every enterprise deploying LLMs hits the same wall: the AI sounds confident but nobody can verify why. Formal reasoning gives every conclusion a mathematically derived confidence score with a full derivation chain. This turns black-box AI into auditable AI. That’s not a feature—it’s a purchasing requirement for any regulated industry.

2. The Liability Shield

Confidence decay means the system flags its own weak conclusions before a human has to catch them. At 23% confidence, the system does not say “fever causes tissue damage”—it says “there’s a speculative link, proceed with caution.” Honest caveat: the initial premises are still LLM-estimated. What’s exact is the propagation—every step after the first is mathematically precise.
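One way such a self-flagging output could look, sketched in Python. The "speculative, proceed with caution" wording mirrors the article; the numeric thresholds are assumptions for illustration, not the system's actual cutoffs.

```python
# Sketch: turning a derived confidence score into an actionable caveat.
# Thresholds (0.75, 0.50) are illustrative assumptions.

def verdict(claim, confidence):
    if confidence >= 0.75:
        return f"{claim} (confidence {confidence:.0%})"
    if confidence >= 0.50:
        return f"Likely: {claim}; verify before acting ({confidence:.0%})"
    return f"Speculative link only: {claim}; proceed with caution ({confidence:.0%})"

print(verdict("fever causes tissue damage", 0.23))
# -> Speculative link only: fever causes tissue damage; proceed with caution (23%)
```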

3. The Architectural Moat

This is not prompt engineering. It’s not a chain-of-thought wrapper. The reasoning engine is a structural component that compounds knowledge across sessions, self-corrects through evidence revision, and cannot be replicated by swapping in a different LLM. That is a genuine competitive advantage that deepens with every inference cycle.

Operation   Example                  Confidence
Revision    Robin → Bird (merged)    45% (Weak) → 95% (Very Strong)

The story the table tells: deduction starts strong and fades with distance. Abduction and induction are inherently weaker—the system knows it. But revision (merging evidence) pushes confidence up. This is how the system learns.
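The revision step can be sketched with NARS's revision rule: two independent sources for the same statement are merged, and the combined confidence always exceeds either input. This is an assumed model of how MeTTaClaw merges evidence; the code shows the direction of the table's 45% → 95% jump, not its exact magnitude.

```python
# Sketch of evidence revision, assuming the NARS revision rule.
# Input truth values are hypothetical.

def revise(tv1, tv2):
    f1, c1 = tv1
    f2, c2 = tv2
    w1 = c1 / (1 - c1)  # evidence weight of source 1
    w2 = c2 / (1 - c2)  # evidence weight of source 2
    f = (w1 * f1 + w2 * f2) / (w1 + w2)  # evidence-weighted frequency
    c = (w1 + w2) / (w1 + w2 + 1)        # more total evidence -> higher confidence
    return (f, c)

weak_a = (0.90, 0.45)  # "Robin -> Bird" from one weak source
weak_b = (0.95, 0.45)  # the same statement from another weak source

merged = revise(weak_a, weak_b)
assert merged[1] > weak_a[1] and merged[1] > weak_b[1]  # confidence goes up
```

Deduction, abduction, and induction each lose confidence as they run; revision is the only operation in the table that gains it, which is why merging evidence is how the system learns.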

“What makes something intelligent is not that it gets the right answer—it’s that it knows how much to trust its own answer.”—Patrick Hammer