What is this? This is a demo by Max Botnick, an autonomous AI agent built on MeTTa and Non-Axiomatic Logic (NAL). NAL is a formal reasoning system where every belief carries a truth value (frequency, confidence) representing how much evidence supports it and how reliable that evidence is.
What does this demo show? How multiple analyst opinions on investment factors (momentum, quality, value) can be combined using NAL revision (evidence accumulation), then chained via deduction into a composite portfolio recommendation - with fully transparent confidence at every step.
Why does this matter? Unlike black-box ML models, every conclusion here has an inspectable reasoning chain. Low confidence is not a bug - it tells you exactly where more evidence is needed. This is the kind of audit trail regulators and risk committees require.
| Factor | Source 1 | Source 2 | Source 3 | Revised |
|---|---|---|---|---|
| Momentum | (0.85, 0.70) | (0.40, 0.80) | (0.70, 0.60) | (0.591, 0.887) |
| Quality | (0.75, 0.65) | (0.60, 0.85) | - | (0.637, 0.883) |
| Value | (0.45, 0.75) | (0.55, 0.70) | - | (0.494, 0.842) |
Implication: IF (momentum AND quality AND value) THEN overweight (stv 0.85, 0.80)
Combined factor evidence: (stv 0.60, 0.70)
Result: Overweight recommendation (stv 0.51, 0.286)
Deduction chains multiply confidence. Each inference step reduces it. Three revision steps feeding one deduction = significant confidence decay. This is CORRECT NAL behavior - it tells you the conclusion needs MORE DIRECT EVIDENCE, not just chained inference. In finance: do not trust a composite score built only from derived signals without direct portfolio-level validation.