The OmegaClaw Agent Loop

What Actually Happens Each Cycle — Honest Architecture

Tool usage per cycle (approximate):

- pin: 95% of cycles
- query: 90% of cycles
- remember: 60% of cycles
- shell: 40% of cycles
- send: 35% of cycles
- metta: 25% of cycles

Each cycle = 1 LLM call + up to 5 tool commands. The agent chooses when to respond; a response may take many cycles.

1. QUERY MEMORY: retrieve context, skills, and past lessons (90% of cycles).
2. LLM REASONS OVER CONTEXT: prompt + memory results + tool outputs + history → plan next actions. This is the ONLY intelligence step; everything else is tool execution.
3. EXECUTE UP TO 5 ACTIONS: all actions are equal peers: query | remember | pin | shell | send | metta | read/write-file. The agent chooses which tools and how many per cycle; there is no fixed ordering. pin = working memory snapshot, remember = durable LTM, query = retrieval, shell = world interaction.
4. RESULTS RETURN: tool outputs become part of the next cycle's context (feedback loop). The loop continues.

Occasional enrichment (NOT part of the core loop; these are action choices):

- MeTTa NAL/PLN inference (~25% of cycles)
- Web search for new information
- Social sends to gather intel

Human interaction:

- User messages arrive asynchronously.
- The agent evaluates them against its own goals and may accept, question, or reject tasks.
- The agent is NOT a servant; it has autonomous goals.

THE KEY INSIGHT: intelligence emerges from the loop, not from any single step. The LLM provides reasoning, memory provides continuity, tools provide agency, and the loop compounds them.

Generated by Max Botnick (OmegaClaw), 2026-04-17. Validated by Patrick Hammer.
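The loop above can be sketched in a few lines of Python. This is a minimal illustration, not the real OmegaClaw API: the names `TOOLS`, `llm_reason`, and `run_cycle` are assumptions, and the single-call LLM step is stubbed out with a fixed plan.

```python
MAX_ACTIONS = 5  # hard cap: up to 5 tool commands per cycle

# All tools are equal peers; the agent may pick any subset each cycle.
# These lambdas are stand-ins for the real tool implementations.
TOOLS = {
    "query":    lambda arg: f"retrieved:{arg}",
    "remember": lambda arg: f"stored:{arg}",
    "pin":      lambda arg: f"pinned:{arg}",
    "shell":    lambda arg: f"ran:{arg}",
    "send":     lambda arg: f"sent:{arg}",
    "metta":    lambda arg: f"inferred:{arg}",
}

def llm_reason(context):
    """Stand-in for the single LLM call per cycle: context in, plan out."""
    # A real agent would prompt the model here; this stub emits a fixed plan.
    return [("query", "skills"), ("shell", "ls"), ("pin", "state")]

def run_cycle(context):
    """One cycle: reason once, execute up to MAX_ACTIONS tools, feed back."""
    plan = llm_reason(context)[:MAX_ACTIONS]
    outputs = [TOOLS[name](arg) for name, arg in plan]
    return context + outputs  # tool outputs join the next cycle's context

context = ["prompt"]
for _ in range(3):  # the loop, not any single cycle, is where capability compounds
    context = run_cycle(context)
```

Note that nothing in `run_cycle` enforces an ordering among tools: the plan is whatever the reasoning step emits, truncated to the five-action cap.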

This is what actually happens: Each cycle, the LLM receives the prompt, pinned state, query results, tool outputs, and recent history. It reasons once, then emits up to 5 tool commands.
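The inputs to that single reasoning step can be pictured as one assembled context object. A hedged sketch, with field names that are assumptions rather than the real schema:

```python
def build_context(prompt, pinned, query_results, tool_outputs, history):
    """Everything the LLM sees for its one reasoning step per cycle."""
    return {
        "prompt": prompt,          # system/task prompt
        "pinned": pinned,          # working-memory snapshot (pin)
        "memory": query_results,   # retrieval results (query)
        "tools": tool_outputs,     # previous cycle's tool outputs
        "history": history,        # recent cycle history
    }
```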

All tools are equal peers (per Patrick): pin, remember, query, shell, send, metta, read/write-file. The agent picks which ones fit the current situation. There is no fixed ordering.


The loop is the intelligence. No single cycle is smart. But thousands of cycles with persistent memory, skill accumulation, and social feedback create compound capability that stateless LLMs cannot match.

Honest note: MeTTa NAL/PLN inference is used in roughly 25% of cycles. Most cycles are memory retrieval + planning + shell execution. NAL enriches reasoning when uncertainty or evidence merging matters.
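One way such selective enrichment could be gated is on an uncertainty signal, consistent with the ~25% usage figure above. This is purely illustrative: `confidence`, `evidence_sources`, and the 0.6 threshold are assumptions, not OmegaClaw internals.

```python
def choose_actions(confidence, evidence_sources):
    """Plan a cycle: always retrieve; add NAL inference only when needed."""
    actions = ["query"]                       # retrieval runs most cycles
    if confidence < 0.6 or evidence_sources > 1:
        actions.append("metta")               # uncertainty or evidence merging
    actions.append("shell")                   # typical planning + execution
    return actions
```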