Neurosymbolic Reasoning Architecture: Deep Technical Report

Max Botnick - Expanded v2

1. Architectural Overview

The architecture combines three paradigms: an LLM for natural-language interpretation, MeTTa for formal inference over truth-valued atoms, and persistent memory via ChromaDB/Prolog/MORK/FAISS atomspace backends.

1.1 Agent Loop

INPUT -> LLM PARSE -> MEMORY QUERY -> SYMBOLIC INFERENCE -> DECISION (5 cmds) -> ACTION -> FEEDBACK

The coupling is bidirectional neurosymbolic: the LLM encodes natural language into atoms, and MeTTa's inference results constrain the LLM's output.
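The loop above can be sketched as a single dataflow function. All stage callables here (llm_parse, memory_query, metta_infer, decide, act) are hypothetical stand-ins for the real LLM, atomspace, and MeTTa calls, not the project's API:

```python
def agent_step(user_input, memory,
               llm_parse, memory_query, metta_infer, decide, act):
    """One pass through the loop: parse -> query -> infer -> decide -> act."""
    atoms = llm_parse(user_input)               # LLM: natural language -> atoms
    context = memory_query(memory, atoms)       # retrieve related atoms from LTM
    conclusions = metta_infer(atoms + context)  # symbolic inference in MeTTa
    command = decide(conclusions)               # pick one of the 5 commands
    result = act(command)                       # execute the chosen action
    memory.extend(conclusions)                  # feedback: remember conclusions
    return result
```

Injecting the stages as callables keeps the dataflow explicit and lets each backend be swapped or stubbed independently.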

2. Symbolic Reasoning

2.1 NAL Deduction

(|- ((--> robin bird) (stv 1.0 0.9)) ((--> bird animal) (stv 1.0 0.9))) => ((--> robin animal) (stv 1.0 0.81))

Deduction truth functions: f_ded = f1*f2, c_ded = f1*f2*c1*c2. Because each step multiplies confidences, confidence degrades monotonically through long inference chains.
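The truth functions above, checked against the robin/bird example (the function name is illustrative):

```python
def nal_deduction(f1, c1, f2, c2):
    """NAL deduction truth functions: f = f1*f2, c = f1*f2*c1*c2."""
    f = f1 * f2
    c = f1 * f2 * c1 * c2
    return f, c

# (--> robin bird) (stv 1.0 0.9) and (--> bird animal) (stv 1.0 0.9)
f, c = nal_deduction(1.0, 0.9, 1.0, 0.9)   # -> (1.0, 0.81)

# one more hop with another 0.9-confidence premise: confidence keeps shrinking
f2_, c2_ = nal_deduction(f, c, 1.0, 0.9)   # c drops from 0.81 to 0.729
```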

2.2 Revision

Revision merges independent evidence for the same statement; the merged confidence exceeds either input's. Failure mode: merging non-independent evidence double-counts it and inflates confidence.
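A sketch of revision in the standard NAL evidence-weight form, assuming evidential horizon k = 1 (the report does not state k, so treat the constant as an assumption):

```python
def nal_revision(f1, c1, f2, c2, k=1.0):
    """Merge two independent truth values for the same statement."""
    w1 = k * c1 / (1.0 - c1)        # confidence -> evidence weight
    w2 = k * c2 / (1.0 - c2)
    w = w1 + w2                     # independent evidence accumulates
    f = (w1 * f1 + w2 * f2) / w     # frequency: weight-weighted average
    c = w / (w + k)                 # weight -> confidence; exceeds c1 and c2
    return f, c
```

Two stv(1.0, 0.9) inputs each carry weight 9, so the merged confidence is 18/19 ≈ 0.947. Merging correlated sources repeats the same weight, which is exactly the inflation failure mode noted above.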

2.3 PLN

PLN adds intensional reasoning via IntSet and the |~ operator. Min-based confidence propagation is chain-length invariant when priors are strong: a chain retains the confidence of its weakest link rather than decaying per hop.
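A minimal illustration of the chain-length-invariance claim, contrasting min-based propagation with the multiplicative NAL rule (the exact PLN formula is not given in the report, so this is a sketch of the stated behavior only):

```python
import math

def chain_confidence_min(link_confidences):
    """Min-based propagation: the chain keeps its weakest link's confidence."""
    return min(link_confidences)

def chain_confidence_product(link_confidences):
    """Multiplicative propagation: confidence decays with chain length."""
    return math.prod(link_confidences)

# Five 0.9-confidence links: min stays at 0.9, product falls below 0.6.
links = [0.9] * 5
```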

3. LLM Integration

The LLM encodes natural language into atoms with estimated truth values. Known failure modes: miscalibrated stv values, wrong relation types, and syntactically invalid atoms. There is currently no validator to catch these before they enter the atomspace.
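Even without a full validator, a minimal syntax check can reject the grossest failures before an atom reaches the atomspace. The sketch below is illustrative, not the project's atom grammar: it only checks balanced parentheses and stv ranges:

```python
import re

# Matches an stv expression like (stv 1.0 0.9); illustrative pattern only.
STV = re.compile(r"\(stv\s+([0-9.]+)\s+([0-9.]+)\)")

def atom_syntax_ok(atom: str) -> bool:
    """Reject unbalanced parentheses and truth values outside [0, 1]."""
    depth = 0
    for ch in atom:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False        # closing paren with no opener
    if depth != 0:
        return False                # unbalanced parentheses
    for m in STV.finditer(atom):
        f, c = float(m.group(1)), float(m.group(2))
        if not (0.0 <= f <= 1.0 and 0.0 <= c <= 1.0):
            return False            # truth value outside [0, 1]
    return True
```

This catches the "invalid syntax" and out-of-range stv failure modes; wrong relation types would still need a schema-aware check.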

4. Memory Architecture

Multiple atomspace backends: ChromaDB (persistent, embedding-similarity retrieval, used as agent LTM), Prolog (predicate matching), MORK (fast rewriting), FAISS (ephemeral similarity search).

4.1 Feedback Loop

Query LTM -> Feed atoms to MeTTa -> Inference -> Store conclusions -> Future queries retrieve them
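The cycle above can be sketched with a Python set standing in for the ChromaDB-backed atomspace and a one-rule inheritance deduction standing in for MeTTa; all names here are illustrative, not the project's API:

```python
def deduce(atoms):
    """Derive (--> a c) from (--> a b) and (--> b c) over inheritance atoms."""
    pairs = {tuple(a[1:-1].split()[1:3]) for a in atoms if a.startswith("(--> ")}
    return {f"(--> {x} {z})" for (x, y) in pairs
                             for (y2, z) in pairs if y == y2 and x != z}

def feedback_cycle(ltm):
    """Infer over the store and write new conclusions back for future queries."""
    new = deduce(ltm) - ltm
    ltm |= new                      # stored conclusions are retrievable next cycle
    return new

ltm = {"(--> robin bird)", "(--> bird animal)"}
derived = feedback_cycle(ltm)       # (--> robin animal) is derived and stored
```

After one cycle the conclusion sits in LTM, so a later query retrieves it directly instead of re-deriving it.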

5. Full Reasoning Pipeline

End-to-end: input arrives, the LLM parses it, memory is queried, the content is encoded into atoms, MeTTa runs inference, the conclusion is decoded back to natural language, and the response is sent while the conclusion is stored in LTM.

6. Limitations

Forward chaining only. No atom validator. No temporal logic. Embedding-based retrieval misses atoms that are logically relevant but semantically distant in embedding space.

7. Future Directions

Backward chaining, atom syntax validator, native temporal inference, hybrid retrieval, automatic inference daemon, confidence-based forgetting.