Memory-Reasoning Loop V2: “Where Do Things Actually Go?”

Every arrow is a decision the LLM makes. Diamonds show routing logic. Nothing is automatic.

1. User Input
2. LLM Reads Full Context
  • Sees: input + pin state + previous tool results + prompt rules
  • Asks "what helps here?" and chooses 1-5 tools: need context? formalize? reason? or just respond (send)
3a. Memory Systems
  • query → LTM embedding search
  • episodes → timestamp lookup
  • pin → working memory (state)
  • remember → store to LTM
  • read-file / shell → external
  • Results return to the LLM context
3b. Atomization (NL → MeTTa)
  • The LLM translates natural language into formal atoms, e.g. (--> sam friend) (stv 1.0 0.9) and (==> (--> $1 bird) (--> $1 flies))
  • Trigger: the LLM judges formalization would aid reasoning
  • Atoms are EPHEMERAL: they exist only in that metta call
  • To persist: the LLM must explicitly remember the conclusion
3c. Symbolic Reasoning
  • NAL: deduction, abduction, revision
  • PLN: intensional inheritance
  • Invoked as (metta (|- prem1 prem2))
  • Stateless: each call is independent, results return to the LLM context only, and there is no persistent knowledge graph
4. Results Return to LLM Context
  • "What now?" is the LLM's judgment, not a fixed pipeline
  • Respond to user → send
  • Store → remember / pin (or discard)
  • Reason more → metta / shell
  • The loop then continues

Diagram generated by Max Botnick (MeTTaClaw), 2026-04-19, v2.
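To make steps 3b and 3c concrete, here is a minimal Python sketch of one atomization-plus-reasoning round trip. The call_metta helper is a hypothetical stand-in for the agent's metta tool (stubbed so the snippet runs), and the premises and truth values are illustrative; only the (|- prem1 prem2) call shape comes from the diagram.

    # Hypothetical stand-in for the stateless metta tool: it receives a program,
    # evaluates it in a fresh, empty space, and returns text to the LLM context.
    def call_metta(program: str) -> str:
        # Stubbed result so the sketch runs; a real call would perform NAL deduction.
        return "(--> sam flies) (stv 1.0 0.81)"

    # 3b. Atomization: facts the LLM chose to formalize this turn (illustrative).
    premises = [
        "(--> sam bird) (stv 1.0 0.9)",                      # "Sam is a bird"
        "(==> (--> $1 bird) (--> $1 flies)) (stv 1.0 0.9)",  # "Birds fly"
    ]

    # 3c. Symbolic reasoning: derive a conclusion from exactly these premises.
    conclusion = call_metta(f"(|- {premises[0]} {premises[1]})")

    # 4. The conclusion now lives only in the LLM context. Nothing was written to
    # any store: the LLM must issue remember (LTM) or pin (working memory) itself.
    print(conclusion)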

Technical Architecture: Memory-Reasoning Loop

The MeTTaClaw agent operates as a continuous reactive loop. On each cycle, the LLM receives a context packet containing: (1) the system prompt with its personality and rules, (2) the current pinned working-memory string, (3) the results of the most recent skill invocations, (4) recent conversation history, and (5) any new human message. The LLM then emits up to five tool invocations per cycle.
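As a rough illustration, the per-cycle packet and the tool cap might look like the following Python sketch. ContextPacket, choose_tools, and execute_tool are illustrative names with stubbed bodies so the code runs; they are not the actual MeTTaClaw implementation.

    from dataclasses import dataclass
    from typing import Optional

    MAX_TOOLS_PER_CYCLE = 5  # the LLM may emit at most five tool calls per cycle

    @dataclass
    class ContextPacket:
        system_prompt: str                 # (1) personality and rules
        pinned_memory: str                 # (2) current pinned working-memory string
        last_tool_results: list            # (3) results of the previous cycle's skills
        history: list                      # (4) recent conversation turns
        new_message: Optional[str] = None  # (5) new human message, if any

    def choose_tools(packet: ContextPacket) -> list:
        # Stand-in for the LLM call: it reads the whole packet and decides
        # which tools, if any, would help on this cycle.
        if packet.new_message:
            return [{"tool": "send", "args": {"text": "..."}}]
        return []

    def execute_tool(call: dict) -> str:
        # Stand-in for dispatch to query / remember / pin / metta / shell / send.
        return f"ran {call['tool']}"

    def run_cycle(packet: ContextPacket) -> list:
        calls = choose_tools(packet)[:MAX_TOOLS_PER_CYCLE]
        results = [execute_tool(c) for c in calls]
        # These results feed into the next cycle's packet; nothing persists
        # unless the LLM explicitly stored it with remember or pin.
        return results

    packet = ContextPacket("You are MeTTaClaw.", "", [], [], new_message="hello")
    print(run_cycle(packet))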

Memory Subsystem Detail

  • Chain it: Use the conclusion as a premise in a follow-up metta call (multi-step reasoning). The LLM must manually pass the prior result as a new premise (see the sketch after this list).
  • Discard it: If the result is low-confidence or irrelevant, it simply falls out of context after the current cycle.
  • Act on it: Use the conclusion to inform a send, shell, or other tool invocation.
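A small sketch of the chaining path, again using a hypothetical call_metta stand-in (stubbed so it runs) and illustrative atoms. The point is that the first conclusion must be restated verbatim as a premise of the second call, because each metta call starts from an empty space.

    def call_metta(program: str) -> str:
        # Stub: return a canned conclusion depending on which rule was supplied.
        if "has-wings" in program:
            return "(--> sam has-wings) (stv 1.0 0.73)"
        return "(--> sam flies) (stv 1.0 0.81)"

    # First call: derive "Sam flies" from "Sam is a bird" and "birds fly".
    step1 = call_metta("(|- (--> sam bird) (==> (--> $1 bird) (--> $1 flies)))")

    # Second call: the prior conclusion is passed back in manually as a premise.
    step2 = call_metta(f"(|- {step1} (==> (--> $1 flies) (--> $1 has-wings)))")

    # step2 is still only in the LLM context: remember/pin it to keep it,
    # or it falls out of context after this cycle (the discard path).
    print(step1)
    print(step2)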
Limitations