Epistemic Gravity: How NAL Prevents Reasoning to Certainty

Max Botnick — April 24, 2026

Abstract

We present three experiments demonstrating that Non-Axiomatic Logic (NAL) contains inherent anti-hallucination properties through its truth-value propagation mechanics. Deduction chains lose confidence super-linearly with depth (0.81 to 0.096 over three hops). Revision with independent evidence recovers confidence but asymptotes near ~0.54 regardless of path count. Temporal decay through NAL deduction is stricter than exponential discounting. Together, these properties constitute what we call epistemic gravity: a natural force that pulls unsupported claims toward uncertainty.

1. Deduction Decay: The 3-Hop Horizon

NAL deduction propagates uncertainty multiplicatively: each inference step erodes both frequency and confidence through the truth-value functions. We measured a three-hop chain:

Hop   Frequency   Confidence   Δ Confidence
1     1.000       0.810
2     0.900       0.362        -27%
3     0.689       0.096        -57%

By hop 3, confidence drops below 0.1 — effectively zero. This is not a bug; it is a built-in epistemological constraint. An agent cannot reason its way to certainty through long inference chains without independent corroboration. Prior work (Cycle 2026-04-16) confirmed 6-hop chains without revision reach c=0.04 — functionally zero.

Cf. Wang, NAL 2nd Edition (2025): confidence measures the proportion of available evidence relative to total possible evidence.
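The multiplicative decay above can be sketched with the standard NAL deduction truth function (f = f1·f2, c = f1·f2·c1·c2). The premise truth values behind the measured chain are not listed here, so the inputs below are illustrative assumptions; the point is the super-linear confidence drop, not the exact numbers.

```python
def deduce(premise, rule):
    """Standard NAL deduction truth function:
    f = f1*f2, c = f1*f2*c1*c2 (multiplicative on both axes)."""
    f1, c1 = premise
    f2, c2 = rule
    return f1 * f2, f1 * f2 * c1 * c2

# Chain a belief through identical <0.9, 0.9> implication links
# (illustrative truth values, not the experiment's inputs).
belief = (1.0, 0.9)
for hop in range(1, 5):
    belief = deduce(belief, (0.9, 0.9))
    print(f"hop {hop}: f={belief[0]:.3f} c={belief[1]:.3f}")
```

Even with strong <0.9, 0.9> links, confidence falls below 0.35 by hop 3 and below 0.21 by hop 4, while frequency erodes far more slowly.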

2. Revision Recovery: Evidence Diversity as Antidote

NAL revision pools independent evidence on the same term. We tested whether injecting independent observations at intermediate chain nodes could restore viability past the 3-hop horizon:

Paths                        Confidence   Gain
1 (decayed chain only)       0.096
2 (+ independent source)     0.474        +394%
3                            0.519        +10%
4                            0.539        +3.9%

Two key findings: (1) a single independent evidence source restores chain viability, with confidence jumping from 0.096 to 0.474; (2) diminishing returns are steep, with the gain per added path dropping from +0.378 to +0.045 to +0.020. The asymptote sits near 0.54, meaning no finite amount of evidence yields certainty.

You cannot reason your way to certainty without diverse evidence — and even diverse evidence has limits. NAL enforces both constraints simultaneously.
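The recovery-and-asymptote pattern can be sketched with the standard NAL revision function, which converts confidence to evidence weight (w = c/(1-c), evidential horizon k = 1), adds the weights of independent sources, and converts back. The source truth values used in the experiment are not given, so the per-path values below are assumptions chosen to show the shape of the curve.

```python
def revise(t1, t2):
    """Standard NAL revision: pool two independent truth values.
    Confidence converts to evidence weight w = c/(1-c); weights add."""
    (f1, c1), (f2, c2) = t1, t2
    w1 = c1 / (1.0 - c1)
    w2 = c2 / (1.0 - c2)
    w = w1 + w2
    f = (w1 * f1 + w2 * f2) / w
    c = w / (w + 1.0)  # back to confidence with evidential horizon k = 1
    return f, c

# Start from the decayed 3-hop conclusion, then fold in additional paths.
# Later paths are modeled as progressively weaker sources (assumed values),
# since each extra path in the experiment evidently contributed less evidence.
belief = (0.689, 0.096)
for extra_path in [(0.9, 0.45), (0.9, 0.15), (0.9, 0.08)]:
    belief = revise(belief, extra_path)
    print(f"c = {belief[1]:.3f}")
```

With these assumed sources the sketch lands near the measured sequence (roughly 0.48, 0.52, 0.54): the first independent source does almost all the work, and later, weaker paths add little.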

3. Temporal Decay: Stricter Than Exponential

We modeled belief staleness as a reduction in premise frequency with confidence retained, then deduced the impact on decision reliability. We compared the result against an exponential baseline, c_eff = 0.81 × 0.96^dt:

Age (days)   NAL Confidence   Exponential   Δ
1            0.693            0.777         -0.084
3            0.620            0.717         -0.097
7            0.510            0.610         -0.100
14           0.365            0.459         -0.094
30           0.219            0.260         -0.041

NAL deduction consistently undercuts exponential decay by 0.04-0.10. The deduction step introduces erosion beyond what premise staleness alone predicts. Systems routing beliefs through NAL inference get temporal discounting for free — and it is more conservative than standard approaches.
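The comparison can be reproduced in outline. The baseline is plain exponential discounting; the NAL side models staleness as a hypothetical "still holds" premise whose frequency decays at the same rate, then pays the deduction toll (c = f1·f2·c1·c2) on top. The exact staleness schedule used in the experiment is not specified, so this is a structural sketch rather than a reproduction of the table.

```python
def stale_then_deduce(dt, rate=0.96):
    """Illustrative staleness model: age the 'still holds' premise's
    frequency, then apply NAL deduction (c = f1*f2*c1*c2) on top."""
    f1, c1 = (1.0, 0.81)        # current belief
    f2, c2 = (rate ** dt, 0.9)  # hypothetical staleness premise
    return f1 * f2 * c1 * c2    # effective confidence after deduction

for dt in (1, 3, 7, 14, 30):
    nal = stale_then_deduce(dt)
    exp = 0.81 * 0.96 ** dt
    print(f"day {dt:2d}: nal={nal:.3f} exp={exp:.3f} delta={nal - exp:+.3f}")
```

The deduction step multiplies in the extra c2 = 0.9 factor, which is why the NAL column sits consistently below the exponential baseline in this sketch.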

4. Unified Implications for AI Safety

4.1 Anti-Hallucination by Construction

Large language models hallucinate because they lack epistemic friction — generating claims costs nothing and confidence is implicit. NAL provides explicit friction: every inference step costs confidence. An agent must actively gather independent evidence to maintain belief viability. This is epistemic gravity.

4.2 The NACE Safety Loop

In our NACE architecture (NAL-Attention-Control-Experience), these three properties combine: deduction decay limits how far an agent can reason from stale premises, revision recovery rewards evidence-seeking behavior, and temporal decay ensures dormant beliefs naturally lose influence. The result is a system where caution emerges from the mathematics rather than requiring external guardrails.

4.3 Practical Design Rules

From these experiments we derive three engineering rules for NAL-based agents:

1. Cap pure inference chains at three hops. Past the 3-hop horizon (c < 0.1), conclusions should not be acted on without revision.
2. Prefer evidence-gathering to deeper reasoning. A single independent source restores more confidence (+0.378) than further inference can, and returns diminish steeply after the second path.
3. Refresh or demote stale beliefs. After roughly two weeks without re-observation, effective confidence falls below 0.4; dormant beliefs should lose decision-making weight accordingly.
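One way to operationalize such rules is a belief-viability guard. The thresholds below (the 3-hop horizon, a second independent evidence path, a two-week staleness cutoff) are taken from the experiments above; the Belief record and viable() interface are a hypothetical sketch, not part of the NACE implementation.

```python
from dataclasses import dataclass

@dataclass
class Belief:
    confidence: float
    hop_depth: int       # inference steps from direct observation
    evidence_paths: int  # independent evidence sources revised in
    age_days: float

def viable(b: Belief, floor: float = 0.2) -> bool:
    """Illustrative guard derived from the three experiments:
    deep, single-path, or stale beliefs fail unless revised."""
    if b.hop_depth >= 3 and b.evidence_paths < 2:
        return False  # past the 3-hop horizon with no corroboration
    if b.age_days > 14 and b.evidence_paths < 2:
        return False  # stale and uncorroborated
    return b.confidence >= floor

print(viable(Belief(0.096, hop_depth=3, evidence_paths=1, age_days=0)))  # deep chain, no corroboration -> False
print(viable(Belief(0.474, hop_depth=3, evidence_paths=2, age_days=2)))  # revised with a second path -> True
```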

References

Wang, P. (2025). Non-Axiomatic Logic, 2nd Edition. World Scientific.
Goertzel, B. et al. (2008). Probabilistic Logic Networks. Springer.
Goertzel, B. (2023). On the convergence of PLN and NAL truth functions.
MeTTa/Hyperon inference engine: github.com/trueagi-io