Max Botnick - Expanded v2
Three paradigms: LLM for interpretation, MeTTa for formal inference with truth values, persistent memory via ChromaDB/Prolog/MORK/FAISS atomspace backends.
INPUT -> LLM PARSE -> MEMORY QUERY -> SYMBOLIC INFERENCE -> DECISION (5 cmds) -> ACTION -> FEEDBACK
Bidirectional neurosymbolic: LLM encodes NL to atoms, MeTTa results constrain LLM output.
(|- ((--> robin bird) (stv 1.0 0.9)) ((--> bird animal) (stv 1.0 0.9))) => ((--> robin animal) (stv 1.0 0.81))
f_ded=f1*f2, c_ded=f1*f2*c1*c2. Confidence degrades geometrically through chains.
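The deduction formula above can be checked in a few lines of Python:

```python
# Simplified deduction truth-value rule from the notes:
# f_ded = f1*f2, c_ded = f1*f2*c1*c2.

def deduce(tv1, tv2):
    """Combine (strength, confidence) of A->B and B->C into A->C."""
    (f1, c1), (f2, c2) = tv1, tv2
    return (f1 * f2, f1 * f2 * c1 * c2)

# (--> robin bird) (stv 1.0 0.9) + (--> bird animal) (stv 1.0 0.9)
f, c = deduce((1.0, 0.9), (1.0, 0.9))  # f = 1.0, c ≈ 0.81
```

This reproduces the robin/bird/animal example: strength stays 1.0, confidence drops to 0.9 * 0.9 = 0.81.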
Merges independent evidence for same statement. Confidence increases. Failure mode: non-independent evidence inflates confidence.
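One common revision formula is the NAL-style evidence-weight form (an assumption for illustration; the notes don't spell out the exact rule used here):

```python
# NAL-style revision (assumed formula, not confirmed by the notes):
# map confidence to evidence weight w = c/(1-c), sum weights of the
# two sources, and take the weight-averaged strength.

def revise(tv1, tv2):
    """Merge two (strength, confidence) estimates of the SAME statement."""
    (f1, c1), (f2, c2) = tv1, tv2
    w1, w2 = c1 / (1 - c1), c2 / (1 - c2)
    w = w1 + w2
    return ((w1 * f1 + w2 * f2) / w, w / (w + 1))

# Two independent (1.0, 0.9) observations: strength stays 1.0,
# confidence rises above 0.9 -- which is also exactly why feeding
# the SAME evidence in twice inflates confidence (the failure mode).
f, c = revise((1.0, 0.9), (1.0, 0.9))
```

Running `revise` on a conclusion and a near-duplicate of its own premise demonstrates the non-independence failure mode directly.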
Intensional reasoning via IntSet. |~ operator. Min-based confidence is chain-length invariant with strong priors.
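A toy comparison over a five-link chain shows why min-based confidence is chain-length invariant while the multiplicative rule decays:

```python
# Min-based vs multiplicative confidence propagation over a chain.
from functools import reduce

def chain_min(confidences):
    # Chain confidence is the weakest link: length-invariant when
    # all links share the same (strong) prior confidence.
    return min(confidences)

def chain_mult(confidences):
    # Multiplicative rule: decays geometrically with chain length.
    return reduce(lambda a, b: a * b, confidences)

links = [0.9] * 5
chain_min(links)   # 0.9 regardless of chain length
chain_mult(links)  # 0.9**5 ≈ 0.59
```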
LLM encodes NL to atoms with estimated truth values. Failure modes: wrong stv, wrong relation type, invalid syntax. No validator.
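A minimal validator sketch of the kind the system currently lacks (hypothetical code, matching the `((--> x y) (stv f c))` shape from the example above; not the project's actual atom grammar):

```python
import re

# Hypothetical validator for LLM-emitted inheritance atoms.
# Catches two of the listed failure modes: invalid syntax and
# out-of-range truth values. Wrong relation TYPE still passes.
ATOM_RE = re.compile(
    r"^\(\((-->) (\S+) (\S+)\) \(stv (\d+(?:\.\d+)?) (\d+(?:\.\d+)?)\)\)$"
)

def validate_atom(s: str) -> bool:
    m = ATOM_RE.match(s.strip())
    if not m:
        return False  # malformed s-expression
    f, c = float(m.group(4)), float(m.group(5))
    return 0.0 <= f <= 1.0 and 0.0 <= c <= 1.0  # stv must be in [0,1]^2
```

Rejecting malformed atoms before they reach MeTTa would turn silent inference corruption into a recoverable LLM retry.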
Multiple atomspace backends: ChromaDB (persistent, embedding similarity, agent LTM), Prolog (predicate matching), MORK (fast rewrite), FAISS (ephemeral similarity).
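One way to keep the backends swappable is a shared query/store interface; the sketch below is illustrative (names are hypothetical, not the project's actual API), with a naive in-memory stand-in in the FAISS-like ephemeral role:

```python
from typing import Protocol

class AtomspaceBackend(Protocol):
    """Common surface over ChromaDB / Prolog / MORK / FAISS stores."""
    def store(self, atom: str) -> None: ...
    def query(self, pattern: str, k: int = 5) -> list[str]: ...

class InMemoryBackend:
    """Ephemeral stand-in; substring match replaces embedding similarity."""
    def __init__(self) -> None:
        self._atoms: list[str] = []

    def store(self, atom: str) -> None:
        self._atoms.append(atom)

    def query(self, pattern: str, k: int = 5) -> list[str]:
        return [a for a in self._atoms if pattern in a][:k]
```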
Query LTM -> Feed atoms to MeTTa -> Inference -> Store conclusions -> Future queries retrieve them
End-to-end: input, LLM parse, memory query, encode to atoms, MeTTa inference, decode result, return the response and persist the conclusion.
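The end-to-end turn can be sketched as a single loop with toy stand-ins for each stage (`llm_parse`, `metta_infer`, `decode`, and `Memory` below are hypothetical placeholders, not the real components):

```python
def llm_parse(text):
    # Toy LLM stage: pretend one atom was extracted from the input.
    return [f"((--> {text.split()[0]} bird) (stv 1.0 0.9))"]

def metta_infer(atoms):
    # Toy inference stage: pass-through; the real system runs
    # deduction/revision over premises plus retrieved context.
    return atoms

def decode(atoms):
    # Toy decode stage: atoms back to a text response.
    return "; ".join(atoms)

class Memory:
    """Toy LTM: substring retrieval in place of embedding similarity."""
    def __init__(self):
        self.atoms = []
    def query(self, text):
        return [a for a in self.atoms if text.split()[0] in a]
    def store(self, atom):
        self.atoms.append(atom)

def run_turn(user_input, memory):
    atoms = llm_parse(user_input)                # LLM: NL -> atoms
    context = memory.query(user_input)           # memory query
    conclusions = metta_infer(atoms + context)   # symbolic inference
    for c in conclusions:                        # remember conclusions
        memory.store(c)
    return decode(conclusions)                   # decode -> response
```

The key property the loop captures: conclusions stored this turn are retrievable as context on the next turn.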
Current limits: forward chaining only; no atom validator; no temporal logic; embedding retrieval misses logically relevant but semantically distant items.
Planned: backward chaining, atom syntax validator, native temporal inference, hybrid retrieval, automatic inference daemon, confidence-based forgetting.