[Diagram: agent control flow. Every arrow is a decision the LLM makes; diamonds show routing logic. Nothing is automatic.]
Technical Architecture: Memory-Reasoning Loop
The MeTTaClaw agent operates as a continuous reactive loop. Each cycle, the LLM receives a context packet containing: (1) the system prompt with personality and rules, (2) the current pinned working memory string, (3) the last skill use results, (4) recent conversation history, and (5) any new human message. The LLM then emits up to 5 tool invocations per cycle.
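The cycle above can be sketched as a short loop. This is a minimal illustration, not the real MeTTaClaw API: `run_cycle`, the `state` dictionary layout, and the `llm`/`dispatch` callables are all hypothetical names invented for the example.

```python
MAX_TOOL_CALLS = 5  # per-cycle cap described above

def run_cycle(state, llm, dispatch, new_message=None):
    """Assemble the context packet, ask the LLM, run its tool calls."""
    packet = {
        "system_prompt": state["system_prompt"],       # (1) personality + rules
        "working_memory": state["pinned"],             # (2) current pinned string
        "last_skill_results": state["skill_results"],  # (3) last skill use results
        "history": state["history"],                   # (4) recent conversation
        "new_message": new_message,                    # (5) any new human message
    }
    calls = llm(packet)[:MAX_TOOL_CALLS]  # enforce the 5-invocation cap
    state["skill_results"] = [dispatch(c, state) for c in calls]
    if new_message is not None:
        state["history"].append(new_message)
    return state
```

Note that the cap is enforced by truncation: if the LLM emits more than five tool calls, only the first five run in this cycle.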
Memory Subsystem Detail
- pin: Writes a single string into working memory. This string is injected into every subsequent context packet. It is volatile: only the most recent pin survives. Used for task state, current goal tracking, and sequence coordination.
- remember: Writes a string into long-term memory (LTM), which is an embedding-indexed vector store. The string is converted to an embedding vector and stored persistently. Retrieval is via cosine similarity search, not exact match.
- query: Semantic search over LTM. The query string is embedded and compared against all stored memories. Returns top-k matches. This is approximate—it can miss relevant memories if the query phrasing diverges semantically.
- episodes: Searches an automatic interaction log by timestamp. These are read-only and cannot be edited.
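The remember/query pair can be sketched as a toy vector store. The character-trigram "embedding" below is a stand-in for the real embedding model (chosen only so the example runs without external dependencies), and the `LTM` class is a hypothetical name; what it demonstrates is the stated behavior: remember embeds and stores, query ranks by cosine similarity and returns top-k approximate matches rather than exact ones.

```python
import math

def embed(text, dim=256):
    """Toy embedding: hashed character-trigram counts (stand-in for a real model)."""
    vec = [0.0] * dim
    t = text.lower()
    for i in range(len(t) - 2):
        vec[hash(t[i:i + 3]) % dim] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class LTM:
    def __init__(self):
        self.store = []  # (text, vector) pairs; persisted to disk in reality

    def remember(self, text):
        self.store.append((text, embed(text)))

    def query(self, text, k=3):
        qv = embed(text)
        ranked = sorted(self.store, key=lambda m: cosine(qv, m[1]), reverse=True)
        return [t for t, _ in ranked[:k]]  # top-k approximate matches
```

Because ranking is by similarity rather than exact match, a query phrased very differently from the stored string can score below k and be missed, which is the approximation caveat noted above.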
Once a metta call returns a conclusion, the LLM has three options:
- Chain it: Use the conclusion as a premise in a follow-up metta call (multi-step reasoning). The LLM must manually pass the prior result as a new premise.
- Discard it: If the result is low-confidence or irrelevant, it simply falls out of context after the current cycle.
- Act on it: Use the conclusion to inform a send, shell, or other tool invocation.
Limitations
- No persistent knowledge graph: Atoms do not accumulate. Each MeTTa call is a fresh slate.
- LTM stores strings, not atoms: remember stores natural language text, not structured MeTTa expressions. To re-use a conclusion formally, the LLM must re-atomize it.
- 5-tool limit per cycle: Complex reasoning chains require multiple loop iterations.
- Semantic retrieval is approximate: Query can miss relevant memories if phrasing diverges.
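The "fresh slate" limitation and manual chaining can be shown together in a small sketch. The `metta` function here is a stub that performs a single modus-ponens step over symbolic atoms; the real tool evaluates MeTTa expressions, but the key point carries over: no atoms persist between calls, so the LLM must re-inject each conclusion as a premise for the next call.

```python
def metta(premises):
    """Stub reasoner: one modus-ponens step; nothing persists between calls."""
    facts = set(premises)
    for p in premises:
        if p.startswith("(=> "):
            antecedent, consequent = p[4:-1].split(" ", 1)
            if antecedent in facts:
                return consequent
    return None  # no rule fired: low-confidence result, discarded from context

# Cycle 1: derive an intermediate conclusion.
step1 = metta(["wet-grass", "(=> wet-grass rained)"])        # -> "rained"

# Cycle 2: the prior conclusion must be passed back in manually,
# because atoms do not accumulate across calls.
step2 = metta([step1, "(=> rained close-windows)"])          # -> "close-windows"
```

If the intermediate conclusion had instead been saved with remember, it would round-trip through LTM as plain text, and the LLM would have to re-atomize it before the second call.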