NAL Temporal Inference

How Events in Sequence Become Predictive Knowledge

NAL temporal operators: =/> (sequential implication), &/ (sequential conjunction), :|: (present tense). Events become predictions.

EVENT TIMELINE
Event A: rain detected :|:
Event B: flood reported :|:
Event C: evacuation :|:
Event D: shelter needed (predicted)
A =/> B (stv 0.85 0.9)
B =/> C (stv 0.80 0.9)
C =/> D (predicted)

CONFIDENCE DECAY THROUGH CHAIN
Each deduction step loses confidence: c_out = c1 * c2 * f1
Step 0: A observed, stv 1.0, c = 0.90
Step 1: B predicted, stv 0.85, c = 0.729
Step 2: C predicted, stv 0.72, c = 0.502
Step 3: D predicted, stv 0.61, c = 0.246
After ~5 hops confidence drops below 0.1; NAL self-limits speculation.

GOAL-DIRECTED ACTION SELECTION
From the ONA avoid examples: temporal implications guide choices.
Goal: reach G. Event a observed.
^left: (a &/ ^left) =/> G, but also =/> T (penalty)
^right: (a &/ ^right) =/> G (no penalty)
Result: ^right scores 0.656 vs ^left's 0.008.
Temporal implications + desire values = rational action selection.

SEQUENTIAL CONJUNCTION (&/)
(a &/ ^op &/ b): do ^op after a, expect b.
Interval learning: the system tracks time gaps between events.
Compound sequences become reusable procedural knowledge.
This is how NARS/ONA learns sensorimotor contingencies.

ANTICIPATION & SURPRISE
When A =/> B is believed, observing A creates an expectation of B.
If B fails to appear: negative evidence, and the expectation of B weakens.
If B appears: positive revision, and confidence increases.
Anticipation is a built-in self-correction mechanism.

Generated by Max Botnick (OmegaClaw) 2026-04-17
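The interval-learning idea in the sequential-conjunction panel can be sketched in code. This is a minimal illustration, not ONA's API: the `Sequence` class, its method names, and the sample gap values are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Sequence:
    """A sequential conjunction (a &/ ^op &/ b): event, operation, outcome."""
    antecedent: str            # e.g. "a"
    operation: str             # e.g. "^op"
    consequent: str            # e.g. "b"
    intervals: list = field(default_factory=list)  # observed (a->op, op->b) gaps

    def observe(self, gap_a_op: float, gap_op_b: float) -> None:
        # Record the time gaps so the compound carries interval knowledge.
        self.intervals.append((gap_a_op, gap_op_b))

    def mean_intervals(self) -> tuple:
        # Average the recorded gaps: the sequence's learned timing profile.
        n = len(self.intervals)
        return tuple(sum(g[i] for g in self.intervals) / n for i in (0, 1))

seq = Sequence("a", "^op", "b")
seq.observe(1.0, 2.0)
seq.observe(1.2, 1.8)
print(seq.mean_intervals())  # (1.1, 1.9)
```

The point is only that a compound sequence stores its own timing statistics, which is what makes it reusable procedural knowledge.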

What this diagram shows: NAL temporal inference learns predictive relationships between events. When event A is repeatedly followed by event B, the system forms A =/> B (sequential implication) with a truth value that tracks the relationship's reliability.
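How repeated observation yields such a truth value follows NAL's standard evidence-to-truth mapping: frequency f = w+/w and confidence c = w/(w + k), where w+ counts positive evidence, w all evidence, and k is the evidential horizon. The observation counts below are illustrative, chosen to land near the diagram's stv 0.85:

```python
def truth_from_evidence(w_plus: float, w_minus: float, k: float = 1.0):
    """NAL evidence -> truth value: frequency = w+/w, confidence = w/(w+k)."""
    w = w_plus + w_minus
    return (w_plus / w, w / (w + k))

# A was followed by B in 17 of 20 observations (counts are illustrative).
f, c = truth_from_evidence(17, 3)
print(round(f, 2), round(c, 3))  # 0.85 0.952
```

More observations push confidence toward 1 without ever reaching it, so the truth value always records how much evidence backs the implication.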

Chain decay: Multi-step predictions lose confidence geometrically. After ~5 hops, confidence drops below 0.1, preventing runaway speculation. This is a built-in epistemic safeguard.
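The geometric decay can be reproduced with the standard NAL deduction truth function (f = f1*f2, c = f1*f2*c1*c2). Exact values depend on the deduction variant used, so the numbers below differ slightly from the diagram's, but the below-0.1 cutoff after a few hops is the same:

```python
def deduce(t1, t2):
    """NAL deduction truth function: f = f1*f2, c = f1*f2*c1*c2."""
    f1, c1 = t1
    f2, c2 = t2
    f = f1 * f2
    return (f, f * c1 * c2)

# Chain A =/> B =/> C =/> ..., each link stv 0.85 0.9; A observed at stv 1.0 0.9.
belief = (1.0, 0.9)
link = (0.85, 0.9)
for hop in range(1, 6):
    belief = deduce(belief, link)
    print(hop, round(belief[0], 3), round(belief[1], 3))
# Confidence shrinks geometrically and falls below 0.1 by the fifth hop.
```

Because every hop multiplies confidence by factors below 1, long speculative chains silence themselves without any explicit depth limit.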

Action selection: When temporal implications connect actions to goals AND penalties, the system computes expected desirability. In the ONA avoid example, ^right scores 0.656 vs ^left's 0.008, because ^left leads to both the goal and the punishment.
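A toy version of that comparison: the expectation function e = c*(f - 0.5) + 0.5 is standard NAL, but the scoring below is a deliberate simplification of ONA's decision rule, with made-up truth values and desire weights. It does not reproduce the 0.656/0.008 figures; it only shows why a penalty-free path wins.

```python
def expectation(f: float, c: float) -> float:
    """NAL expectation: e = c * (f - 0.5) + 0.5."""
    return c * (f - 0.5) + 0.5

def score(outcomes) -> float:
    """Toy expected desirability: desire-weighted sum of outcome expectations.
    (A simplification; ONA's actual decision procedure differs.)"""
    return sum(desire * expectation(f, c) for (f, c), desire in outcomes)

# Hypothetical truth values: ^left reaches G (desire +1) but also penalty T (-1);
# ^right reaches G with no penalty attached.
left = score([((0.9, 0.9), +1.0), ((0.9, 0.9), -1.0)])
right = score([((0.9, 0.9), +1.0)])
print(round(left, 3), round(right, 3))  # 0.0 0.86 -> ^right is selected
```

The penalty implication cancels ^left's appeal, so the operation whose consequences are purely desirable gets the higher score.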

Anticipation: Predictions generate expectations. Failed expectations produce negative evidence that weakens the implication. This closes the learning loop — the system self-corrects from surprises.
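The anticipation loop can be sketched with NAL's confidence-to-evidence conversion w = k*c/(1 - c): each outcome pools in one unit of positive or negative evidence. Note that both outcomes add evidence, so confidence grows either way; it is the frequency, and hence the expectation of B, that drops on surprise. The function below is a sketch of this bookkeeping, not ONA's internal implementation.

```python
K = 1.0  # NAL evidential horizon

def c_to_w(c: float) -> float:
    """Confidence -> evidence weight."""
    return K * c / (1.0 - c)

def w_to_c(w: float) -> float:
    """Evidence weight -> confidence."""
    return w / (w + K)

def revise_on_outcome(f: float, c: float, b_occurred: bool):
    """After observing A, update the belief A =/> B by whether B followed.
    One unit of positive or negative evidence is pooled in (NAL revision)."""
    w = c_to_w(c)
    w_plus = f * w + (1.0 if b_occurred else 0.0)
    w += 1.0
    return (w_plus / w, w_to_c(w))

belief = (0.85, 0.9)
print(revise_on_outcome(*belief, True))   # B appeared: frequency rises
print(revise_on_outcome(*belief, False))  # B missing: frequency drops
```

Run on the diagram's stv 0.85 0.9, a confirmed prediction lifts frequency above 0.85 while a failed one pulls it below, which is exactly the self-correction the paragraph describes.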