What this diagram shows: NAL temporal inference learns predictive relationships between events. When event A is repeatedly followed by event B, the system forms the predictive implication A =/> B, whose truth value tracks how reliable the prediction has proven.
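A minimal sketch of how such a truth value can be derived from evidence counts, using the standard NAL frequency/confidence form (f = w+/w, c = w/(w+k), with evidential horizon k = 1); the event counts here are illustrative, not taken from any specific run.

```python
def truth_from_evidence(positive, total, k=1.0):
    """NAL-style truth value: frequency = w+/w, confidence = w/(w+k)."""
    frequency = positive / total if total > 0 else 0.5
    confidence = total / (total + k)
    return frequency, confidence

# Hypothetical: B followed A in 9 of 10 observed occurrences of A.
f, c = truth_from_evidence(9, 10)
# f = 0.9 (how often the prediction held), c = 10/11 (how much evidence backs it)
```

More observations raise confidence toward (but never to) 1, so well-tested implications dominate inference.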
Chain decay: Multi-step predictions lose confidence geometrically. After ~5 hops, confidence drops below 0.1, preventing runaway speculation. This is a built-in epistemic safeguard.
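The decay can be seen directly in the NAL deduction truth function, which multiplies confidences at every composition step; the per-link truth value (f = 1.0, c = 0.6) below is an illustrative assumption chosen to show the ~5-hop threshold, not a value from the diagram.

```python
def deduction(t1, t2):
    """NAL deduction truth function: f = f1*f2, c = f1*f2*c1*c2."""
    f1, c1 = t1
    f2, c2 = t2
    return f1 * f2, f1 * f2 * c1 * c2

link = (1.0, 0.6)        # hypothetical truth value of each learned =/> link
chain = link
for hop in range(2, 6):  # extend the prediction chain to 5 hops total
    chain = deduction(chain, link)
    print(f"hop {hop}: confidence = {chain[1]:.4f}")
# Confidence shrinks as 0.6**hops: after 5 hops it is ~0.078, under 0.1.
```

Because confidence multiplies at each hop, long speculative chains are suppressed automatically rather than by an explicit depth limit.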
Action selection: When temporal implications connect actions to goals AND penalties, the system computes an expected desirability for each candidate action. In the ONA avoid example, ^right scores 0.656 while ^left scores only 0.008, because ^left leads to both the goal and the punishment, which nearly cancel.
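A hedged sketch of how such scores can arise: sum frequency × confidence × outcome utility over each action's learned implications. This is an illustrative scoring rule, not ONA's exact decision procedure, and the truth values below are hypothetical numbers chosen so the scores land on the figures quoted above.

```python
def expected_desirability(outcomes):
    """Illustrative score: sum of (frequency * confidence * outcome utility)."""
    return sum(f * c * utility for f, c, utility in outcomes)

# Hypothetical learned implications (f, c, utility of the predicted outcome):
#   ^right =/> goal   with (0.8, 0.82), utility +1
#   ^left  =/> goal   with (0.8, 0.80), utility +1
#   ^left  =/> punish with (0.8, 0.79), utility -1
right = expected_desirability([(0.8, 0.82, +1.0)])
left = expected_desirability([(0.8, 0.80, +1.0), (0.8, 0.79, -1.0)])
# right ≈ 0.656; left ≈ 0.640 - 0.632 = 0.008
```

The goal and penalty terms for ^left almost cancel, so ^right wins by two orders of magnitude despite similar goal-reaching reliability.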
Anticipation: Predictions generate expectations. Failed expectations produce negative evidence that weakens the implication. This closes the learning loop — the system self-corrects from surprises.
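The self-correction loop can be sketched as evidence accounting: a confirmed anticipation adds positive evidence, a failed one adds only total evidence, lowering frequency. The counts and helper below are illustrative, not ONA's internal bookkeeping.

```python
def revise(positive, total, confirmed):
    """Add one unit of evidence: positive if the anticipation was confirmed."""
    return positive + (1 if confirmed else 0), total + 1

# Hypothetical implication confirmed in all 8 prior observations:
pos, tot = 8, 8
f, c = pos / tot, tot / (tot + 1)       # f = 1.0, c = 8/9

# Anticipated B fails to occur -> negative evidence:
pos, tot = revise(pos, tot, confirmed=False)
f2, c2 = pos / tot, tot / (tot + 1)     # f drops to 8/9, c rises to 9/10
```

Note that confidence still rises with the failed observation: the system becomes more certain that the implication is less than perfectly reliable, which is exactly the weakening described above.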