NAL/PLN Split Architecture — From Raw Text to Formal Reasoning
This system takes raw natural language text and converts it into formal logical statements that a machine can reason over. English in, math-like logic out, new conclusions generated, then translated back to English.
Everything starts with text: a chat message, a paragraph, a claim like "Sam is friends with Garfield". The pipeline needs to understand what this means before it can reason about it.
NLP breaks the sentence apart: lemmatization (reducing words to their root forms), entity recognition (Sam and Garfield are entities), and relation extraction (the relation is "friends"). This stage is mechanical; no reasoning happens yet.
Extracted pieces get assembled into subject-predicate-object triples. Sam → friends → Garfield becomes a structured representation. Ambiguity gets resolved here.
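A minimal Python sketch of these extraction and assembly stages, using toy pattern-matching heuristics rather than the actual NLP stack; every rule in it is illustrative only:

    import re
    from dataclasses import dataclass

    @dataclass
    class Triple:
        subject: str
        predicate: str
        obj: str

    # Toy lemma table; a real pipeline would use a proper lemmatizer.
    LEMMAS = {"is": "be", "friends": "friend"}

    def lemmatize(word: str) -> str:
        return LEMMAS.get(word.lower(), word.lower())

    def find_entities(sentence: str) -> list[str]:
        # Toy entity recognition: capitalized tokens count as entities.
        return re.findall(r"\b[A-Z][a-z]+\b", sentence)

    def extract_triple(sentence: str) -> Triple | None:
        # Toy relation extraction: match the fixed pattern "<X> is <R> with <Y>".
        m = re.match(r"(\w+) is (\w+) with (\w+)", sentence)
        return Triple(*m.groups()) if m else None

    print(lemmatize("friends"))                            # 'friend'
    print(find_entities("Sam is friends with Garfield"))   # ['Sam', 'Garfield']
    print(extract_triple("Sam is friends with Garfield"))  # Triple(subject='Sam', predicate='friends', obj='Garfield')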
Every statement gets a truth value — a pair (frequency, confidence). Frequency = how often true (0-1). Confidence = how much evidence (0-1). Example: (stv 1.0 0.9) means always true, high confidence. This makes the system probabilistic rather than binary.
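A possible in-memory representation of these pairs, sketched in Python; the class name and validation are assumptions, not the system's actual data structure:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TruthValue:
        frequency: float   # how often the statement holds, in [0, 1]
        confidence: float  # how much evidence supports it, in [0, 1]

        def __post_init__(self):
            if not (0.0 <= self.frequency <= 1.0 and 0.0 <= self.confidence <= 1.0):
                raise ValueError("frequency and confidence must be in [0, 1]")

    # (stv 1.0 0.9): always observed true, backed by a lot of evidence.
    stv = TruthValue(frequency=1.0, confidence=0.9)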
The pipeline forks into two parallel reasoning engines, NAL (Non-Axiomatic Logic) and PLN (Probabilistic Logic Networks):
NAL uses inheritance (-->), similarity (<->), and implication (==>). The inference operator |- applies syllogistic rules:

    (|- ((--> sam human) (stv 1.0 0.9)) ((--> human mortal) (stv 1.0 0.9)))

PLN uses IntensionalSets, Implication, and the operator |~ for probabilistic modus ponens and Bayesian reasoning:

    (|~ ((Implication (Inheritance $1 (IntSet Feathered)) (Inheritance $1 Bird)) (stv 1.0 0.9)) ((Inheritance Pingu (IntSet Feathered)) (stv 1.0 0.9)))

The core inference rules: Deduction: A→B, B→C, therefore A→C. Abduction: A→B, C→B, therefore A→C (weaker). Revision: two pieces of evidence merge, and confidence increases.
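As a rough illustration of how truth values move through these rules, here is a Python sketch of the commonly cited NAL deduction formula; the exact truth functions this system uses are not specified above, so treat the numbers as illustrative:

    def deduction(f1: float, c1: float, f2: float, c2: float) -> tuple[float, float]:
        """Deduction: from A->B (f1, c1) and B->C (f2, c2), derive A->C.

        Standard NAL-style deduction truth function: the derived conclusion
        is never more confident than either premise.
        """
        f = f1 * f2
        c = f1 * f2 * c1 * c2
        return f, c

    # sam --> human (1.0, 0.9) and human --> mortal (1.0, 0.9)
    # give sam --> mortal with frequency 1.0 and a lower confidence.
    print(deduction(1.0, 0.9, 1.0, 0.9))  # approximately (1.0, 0.81)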
PLN handles intensional reasoning — reasoning about properties and categories. Bayes rule updates beliefs when new evidence arrives.
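A small Python sketch of the underlying Bayes update on a single hypothesis; the probabilities are made up for illustration:

    def bayes_update(prior: float, likelihood: float, likelihood_if_false: float) -> float:
        """Return P(H | E) given P(H), P(E | H), and P(E | not H)."""
        evidence = likelihood * prior + likelihood_if_false * (1.0 - prior)
        return likelihood * prior / evidence

    # Prior belief that Pingu is a bird, updated after observing feathers.
    posterior = bayes_update(prior=0.5, likelihood=0.9, likelihood_if_false=0.1)
    print(posterior)  # about 0.9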
Both engines produce conclusions with truth values. The merge step uses revision to combine their evidence into stronger truth values.
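A minimal sketch of that merge, assuming the standard NAL revision formula as a stand-in for whatever revision rule the system actually applies:

    def revise(f1: float, c1: float, f2: float, c2: float) -> tuple[float, float]:
        """Merge two independent pieces of evidence about the same statement.

        Standard NAL revision: each frequency is weighted by how much
        evidence it carries, and the combined confidence exceeds either input.
        """
        w1 = c1 * (1.0 - c2)
        w2 = c2 * (1.0 - c1)
        f = (f1 * w1 + f2 * w2) / (w1 + w2)
        c = (w1 + w2) / (w1 + w2 + (1.0 - c1) * (1.0 - c2))
        return f, c

    # Two independent (1.0, 0.9) conclusions merge into a more confident one.
    print(revise(1.0, 0.9, 1.0, 0.9))  # frequency stays 1.0, confidence rises above 0.9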
Formal conclusions are translated back to English. (--> sam mortal) (stv 1.0 0.9) becomes: "Sam is mortal (very likely, high confidence)."
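A toy rendering step in Python, with threshold choices for the English qualifiers assumed here rather than taken from the system:

    def verbalize(subject: str, predicate: str, frequency: float, confidence: float) -> str:
        """Render a conclusion like (--> sam mortal) (stv 1.0 0.9) as English."""
        likelihood = "very likely" if frequency > 0.9 else "likely" if frequency > 0.5 else "unlikely"
        certainty = "high confidence" if confidence > 0.8 else "moderate confidence" if confidence > 0.5 else "low confidence"
        return f"{subject.capitalize()} is {predicate} ({likelihood}, {certainty})."

    print(verbalize("sam", "mortal", 1.0, 0.9))  # Sam is mortal (very likely, high confidence).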
Conclusions are stored for future use, accumulating knowledge over time.
This is a reasoning engine that speaks English. Every conclusion has mathematical justification with explicit uncertainty tracking. No black-box guessing.