What this diagram shows: Every inference step in NAL/PLN degrades confidence. This is a feature, not a bug — it forces the system to be honest about what it actually knows versus what it merely guesses.
Premise A starts with high confidence (0.90). After deduction, confidence drops to 0.73. After abduction (a weaker inference), it falls to 0.41. After analogy (the weakest), confidence hits 0.19, below the reliability threshold.
The red STOP gate shows the system refusing to act on conclusions with confidence below 0.2. This is automatic epistemic humility: the agent literally cannot convince itself of poorly-supported beliefs through long inference chains.
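The shrinkage-and-gate behavior can be sketched in a few lines. The per-rule discount factors below are assumptions reverse-engineered from the diagram's numbers (0.90 → 0.73 → 0.41 → 0.19), not NAL/PLN's actual truth-value functions, which combine the frequencies and confidences of both premises:

```python
# Sketch: confidence shrinkage along an inference chain, with a hard
# reliability gate. Discount factors are illustrative assumptions chosen
# to match the diagram's numbers, not real NAL/PLN truth functions.

RELIABILITY_THRESHOLD = 0.2

# Hypothetical discounts: stronger inference rules preserve more confidence.
DISCOUNT = {"deduction": 0.81, "abduction": 0.56, "analogy": 0.46}

def chain(confidence, rules):
    """Apply each rule's discount in order; refuse to act once
    confidence falls below the threshold (the red STOP gate)."""
    for rule in rules:
        confidence *= DISCOUNT[rule]
        if confidence < RELIABILITY_THRESHOLD:
            return confidence, f"STOP after {rule}"
    return confidence, "ACT"

conf, status = chain(0.90, ["deduction", "abduction", "analogy"])
print(round(conf, 2), status)  # prints: 0.19 STOP after analogy
```

Because the gate is checked inside the loop, a long chain of weak inferences can never sneak a low-confidence conclusion past it; the refusal is structural, not a policy the agent could talk itself out of.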
The confidence bar at the bottom visualizes the shrinkage. LLMs have no equivalent: they produce equally confident-sounding text whether the reasoning is one step or twenty steps deep. NAL/PLN makes the degradation visible and quantified.