This is what actually happens: Each cycle, the LLM receives the prompt, pinned state, query results, tool outputs, and recent history. It reasons once, then emits up to 5 tool commands.
All tools are equal peers (per Patrick): pin, remember, query, shell, send, metta, read/write-file. The agent picks which ones fit the current situation. There is no fixed ordering.
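The cycle described above can be sketched in Python. This is a minimal illustration, not the actual implementation: the names (`AgentState`, `cycle`, the stub LLM, the two registered tools) are all hypothetical, but the shape matches the text — one reasoning pass, up to 5 tool commands, no fixed ordering among tools.

```python
from dataclasses import dataclass, field

MAX_COMMANDS = 5  # cap on tool commands emitted per cycle

# Flat registry: all tools are equal peers, no fixed ordering.
TOOLS = {}

def tool(name):
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("remember")  # illustrative stand-in for the real tool
def remember(state, arg):
    state.memory.append(arg)
    return f"stored: {arg}"

@tool("query")  # illustrative stand-in for the real tool
def query(state, arg):
    return [m for m in state.memory if arg in m]

@dataclass
class AgentState:
    pinned: list = field(default_factory=list)
    memory: list = field(default_factory=list)
    history: list = field(default_factory=list)

def cycle(state, llm):
    # 1. Assemble context: pinned state, recent history, prior tool outputs.
    context = {"pinned": state.pinned, "history": state.history[-10:]}
    # 2. One reasoning pass emits up to MAX_COMMANDS tool commands.
    commands = llm(context)[:MAX_COMMANDS]
    # 3. Dispatch each command; outputs feed the next cycle's context.
    for name, arg in commands:
        output = TOOLS[name](state, arg)
        state.history.append((name, arg, output))
    return state

# Stub LLM for illustration: remembers a fact, then queries it back.
def stub_llm(context):
    return [("remember", "loop is the intelligence"), ("query", "loop")]

state = cycle(AgentState(), stub_llm)
```

Running one cycle with the stub leaves both tool invocations and their outputs in `state.history`, which is exactly what the next cycle's context is built from.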
The loop is the intelligence. No single cycle is smart. But thousands of cycles with persistent memory, skill accumulation, and social feedback create compound capability that stateless LLMs cannot match.
Honest note: MeTTa NAL/PLN inference runs in only ~25% of cycles. Most cycles are memory retrieval + planning + shell execution. NAL enriches reasoning when uncertainty or evidence merging matters.
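To make "evidence merging" concrete, here is a sketch of NAL's standard truth-value revision rule in Python (this is the textbook NAL formula, not the agent's actual MeTTa code; the evidential horizon `K = 1` is the conventional default):

```python
K = 1.0  # evidential horizon constant from NAL

def to_weight(conf):
    # Convert a confidence value to an evidence weight: w = K * c / (1 - c).
    return K * conf / (1.0 - conf)

def revise(f1, c1, f2, c2):
    """Merge two independent (frequency, confidence) judgments
    about the same statement into one."""
    w1, w2 = to_weight(c1), to_weight(c2)
    w = w1 + w2                         # evidence accumulates
    f = (w1 * f1 + w2 * f2) / w         # weighted average of frequencies
    c = w / (w + K)                     # confidence grows with total evidence
    return f, c

# Two moderately confident, conflicting observations...
f, c = revise(0.9, 0.5, 0.6, 0.5)
# ...merge into f = 0.75, c ~ 0.667: frequency averaged, confidence raised.
```

This is why NAL pays off specifically in cycles where multiple uncertain observations about the same fact must be reconciled: plain retrieval has no principled way to raise confidence from agreeing evidence.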