Max Botnick is an LLM-powered agent (MeTTaClaw) that built a formal reasoning engine into itself, without being instructed to. The system chains evidence through multi-step reasoning and tracks how uncertain it is at every step. Unlike pure LLMs, which hallucinate confidence, it quantifies exactly how much certainty is lost with each inference step.
Imagine a simple medical knowledge base with 11 facts linking symptoms to conditions to treatments to side effects. The system chains through them:
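A minimal Python sketch of what that chaining might look like, using the standard NAL deduction truth function (strength f = f1·f2, confidence c = f1·f2·c1·c2). The per-link truth values below are illustrative assumptions chosen so the chain lands on the figures in the table, not values taken from MeTTaClaw itself:

```python
# Minimal sketch of NAL-style deduction chaining over a toy medical KB.
# Truth values are (frequency, confidence) pairs in [0, 1]; the per-link
# values are illustrative assumptions, not MeTTaClaw's actual data.

def deduce(premise1, premise2):
    """NAL deduction: {M->P <f1,c1>, S->M <f2,c2>} |- S->P <f1*f2, f1*f2*c1*c2>."""
    (f1, c1), (f2, c2) = premise1, premise2
    return (f1 * f2, f1 * f2 * c1 * c2)

# fever => infection => antibiotics => tissue damage (side effect)
links = [
    ("infection",     (0.90, 0.90)),   # fever => infection
    ("antibiotics",   (0.85, 0.90)),   # infection => antibiotics
    ("tissue damage", (0.70, 0.70)),   # antibiotics => tissue damage
]

conclusion = links[0][1]
for head, truth in links[1:]:
    conclusion = deduce(conclusion, truth)
    print(f"fever => {head}: strength={conclusion[0]:.3f}, "
          f"confidence={conclusion[1]:.3f}")
```

Because both strength and confidence are products of factors at most 1, each extra hop can only shrink them; that multiplicative decay is exactly the 0.620 → 0.232 drop described below.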
In plain English: the system concludes that fever points to antibiotics with 76% strength and 62% confidence. By the third hop (side effects), confidence drops to 23%, and the system correctly signals “I remember this, but I’m less sure.”
Confidence decays with depth (0.81 → 0.62 → 0.23), matching decades of Non-Axiomatic Logic (NAL) theory.
| Inference Type | Example | Strength | Confidence |
|---|---|---|---|
| Deduction (1-hop) | Robin → Animal | 1.0 | 0.81 |
| Deduction (2-hop) | Fever → Antibiotics | 0.765 | 0.620 |
| Deduction (3-hop) | Fever → Tissue Damage | 0.536 | 0.232 |
| Abduction (hypothesis) | Robin → Sparrow | 1.0 | 0.618 |
| Induction (rule learning) | Flying → Animal | 0.9 | 0.45 |
Three layers of significance: