What this diagram shows: OmegaClaw does not use a static to-do list. Goals compete for attention through a priority queue scored by PLN-derived truth values.
Candidate Goals (left, purple) are all goals the agent currently holds. Each has a score computed from NAL/PLN inheritance chains: goal --> capability --> priority_level. The score is the product frequency × confidence (f × c) of the derived truth value.
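A minimal sketch of how such a score could be computed, assuming the standard NAL deduction truth function (f = f1·f2, c = f1·f2·c1·c2) is applied along the chain; the example truth values are hypothetical, not taken from OmegaClaw:

```python
from functools import reduce

def deduce(tv1, tv2):
    # NAL deduction truth function: combines two inheritance links,
    # e.g. (goal --> capability) and (capability --> priority_level),
    # into the truth value of the derived chain.
    f1, c1 = tv1
    f2, c2 = tv2
    f = f1 * f2
    c = f1 * f2 * c1 * c2
    return (f, c)

def score(chain):
    # Rank key: f * c of the truth value derived along the whole chain.
    f, c = reduce(deduce, chain)
    return f * c

# Two links with (frequency, confidence):
# goal --> capability, capability --> priority_level
chain = [(0.9, 0.8), (0.8, 0.9)]
print(score(chain))
```

A goal with a strong but poorly evidenced chain (high f, low c) thus ranks below one with solid evidence on both links, which is the point of multiplying f by c rather than ranking on frequency alone.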
Active Goal (right, green) is the winner — the highest-scoring goal gets executed. It carries a concrete next-action and a checkpoint so the agent can resume after interrupts.
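One plausible shape for the winning goal record, assuming fields for the next action and resume checkpoint (all names here are illustrative assumptions, not OmegaClaw's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class ActiveGoal:
    # Hypothetical structure; field names are assumptions.
    name: str
    score: float                      # f * c of the derived truth value
    next_action: str                  # concrete step to execute now
    checkpoint: dict = field(default_factory=dict)  # state to resume after an interrupt

goal = ActiveGoal("fix_failing_test", 0.47, "run the failing test", {"step": 2})
print(goal.next_action)
```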
Scoring Formula (right, orange) shows the mechanism: goals are encoded as inheritance chains with truth values, ranked by f×c product. Counter-evidence (negative observations) revises scores downward through NAL revision, preventing stale goals from dominating.
This is not just a priority list — it is a formally grounded decision procedure where every ranking is traceable to specific evidence and inference steps. The agent can explain why it chose one goal over another by showing the derivation chain.
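Putting the pieces together, selection plus explanation might look like the following sketch; the goal names, links, and truth values are invented for illustration, and the deduction truth function is the standard NAL one (f = f1·f2, c = f1·f2·c1·c2):

```python
def deduce(tv1, tv2):
    # NAL deduction: f = f1*f2, c = f1*f2*c1*c2
    f1, c1 = tv1
    f2, c2 = tv2
    return (f1 * f2, f1 * f2 * c1 * c2)

def explain(goal, links):
    # Derive the chain's truth value and keep the step names
    # so the ranking stays traceable to its evidence.
    f, c = deduce(links[0][1], links[1][1])
    steps = " ; ".join(name for name, _ in links)
    return goal, f * c, f"{steps} |- f={f:.2f}, c={c:.2f}"

candidates = {
    "fix_failing_test": [("fix_failing_test --> can_edit_code", (0.9, 0.9)),
                         ("can_edit_code --> high_priority", (0.9, 0.8))],
    "refactor_module":  [("refactor_module --> can_edit_code", (0.7, 0.6)),
                         ("can_edit_code --> high_priority", (0.9, 0.8))],
}
ranked = sorted((explain(g, links) for g, links in candidates.items()),
                key=lambda t: t[1], reverse=True)
winner = ranked[0]
print("active goal:", winner[0])
print("because:", winner[2])
```

The explanation string is exactly the derivation chain the prose describes: the agent can answer "why this goal?" by printing the inference steps and the truth value they produced.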