# Multi-Criteria Preference Learning via NAL

## Method (validated 2026-04-11, cycles 1514-1519)

### Problem
Rank options across multiple dimensions without explicit utility functions.

### Steps
1. **Encode dimensional preferences** as product terms with simple truth values `(stv frequency confidence)`:
   - `(--> (x optionA taste) good) (stv 0.9 0.8)`
   - `(--> (x optionB cost) low) (stv 0.9 0.8)`
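
   A minimal Python sketch of how these premises can be mirrored as plain data, assuming each statement reduces to a (frequency, confidence) pair; the `observations` dict and its key layout are illustrative, not part of the NAL encoding itself.

   ```python
   # Hypothetical mirror of the dimensional premises, keyed by
   # (option, dimension, value) to match the product terms above.
   # The validated run also encodes optionA's cost and optionB's taste;
   # those truth values are not listed in this note, so they are omitted here.
   observations = {
       ("optionA", "taste", "good"): (0.9, 0.8),  # (stv frequency confidence)
       ("optionB", "cost", "low"):   (0.9, 0.8),
   }
   ```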

2. **Define dimension-to-preference implications** via conditional syllogism:
   - `(==> (--> (x $1 taste) good) (--> $1 preferred_taste)) (stv 1.0 0.9)`
   - `(==> (--> (x $1 cost) low) (--> $1 preferred_cost)) (stv 1.0 0.9)`

3. **Encode dimension importance as weights** in overall implications:
   - `(==> (--> $1 preferred_taste) (--> $1 overall_preferred)) (stv 0.7 0.9)` (taste weight 70%)
   - `(==> (--> $1 preferred_cost) (--> $1 overall_preferred)) (stv 0.3 0.9)` (cost weight 30%)
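
   Under the same assumption, both implication layers reduce to per-dimension (frequency, confidence) pairs. The dimension weight lives entirely in the layer-2 frequency; the weights here happen to sum to 1, but NAL does not require that. The dict names below are illustrative.

   ```python
   # Hypothetical mirror of the two implication layers, keyed by dimension.
   # Layer 1 (step 2): dimension fact        -> preferred_<dimension>
   # Layer 2 (step 3): preferred_<dimension> -> overall_preferred
   dimension_rules = {"taste": (1.0, 0.9), "cost": (1.0, 0.9)}
   weight_rules = {"taste": (0.7, 0.9),   # frequency 0.7 encodes the 70% taste weight
                   "cost":  (0.3, 0.9)}   # frequency 0.3 encodes the 30% cost weight
   ```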

4. **Derive per-dimension `overall_preferred` scores** for each option via deduction, chaining each dimensional fact through its two implications (see the sketch below).
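
   A sketch of this step in Python, assuming the standard NAL deduction truth function (f = f1·f2, c = f1·f2·c1·c2); the engine used in the validation run may differ, so the printed numbers are illustrative rather than the recorded results.

   ```python
   def deduction(premise, rule):
       """Standard NAL deduction truth function: f = f1*f2, c = f1*f2*c1*c2."""
       f1, c1 = premise
       f2, c2 = rule
       f = f1 * f2
       return (f, f * c1 * c2)

   # Two-hop chain for one option and one dimension, e.g. optionA / taste:
   fact = (0.9, 0.8)          # (--> (x optionA taste) good)
   to_preferred = (1.0, 0.9)  # taste fact -> preferred_taste
   to_overall = (0.7, 0.9)    # preferred_taste -> overall_preferred (weight 0.7)

   per_dim = deduction(deduction(fact, to_preferred), to_overall)
   print(per_dim)  # ~(0.63, 0.367): taste's contribution to optionA's overall_preferred
   ```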

5. **Revise** the two per-dimension `overall_preferred` judgements for each option into a single composite:
   - `(|- ((--> optionA overall_preferred) (stv X1 C1)) ((--> optionA overall_preferred) (stv X2 C2)))`
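
   A sketch of the revision step, assuming the standard NAL revision truth function with evidential horizon k = 1; the input pairs below are illustrative placeholders, not the values from the validated run.

   ```python
   K = 1.0  # evidential horizon; the engine's actual value may differ

   def revision(tv1, tv2, k=K):
       """Standard NAL revision: pool the evidence behind two independent judgements."""
       f1, c1 = tv1
       f2, c2 = tv2
       w1 = k * c1 / (1.0 - c1)   # evidence weight behind each judgement
       w2 = k * c2 / (1.0 - c2)
       w = w1 + w2
       return ((w1 * f1 + w2 * f2) / w, w / (w + k))

   # Composite overall_preferred for one option from its taste and cost contributions:
   taste_contrib = (0.63, 0.37)   # illustrative, not the validated run's output
   cost_contrib = (0.03, 0.02)
   print(revision(taste_contrib, cost_contrib))  # ~(0.61, 0.38)
   ```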

### Key Findings
- OptionA (good taste, high cost): composite overall_preferred 0.526/0.412 (frequency/confidence)
- OptionB (ok taste, low cost): composite overall_preferred 0.347/0.276
- Taste dominance correctly reflected in composite ranking.
- Revision combines independent dimensional evidence properly.
- Zero-truth premises (stv 0.0) do NOT propagate through conditional syllogism; use stv 0.1 instead (see the sketch after this list).
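
This follows directly from the deduction truth function assumed above: the derived confidence is proportional to the derived frequency, so a premise frequency of 0.0 erases all evidence in the conclusion, while 0.1 leaves a weak but usable judgement. A minimal demonstration:

```python
def deduction(premise, rule):
    """Standard NAL deduction truth function: f = f1*f2, c = f1*f2*c1*c2."""
    f1, c1 = premise
    f2, c2 = rule
    f = f1 * f2
    return (f, f * c1 * c2)

rule = (1.0, 0.9)
print(deduction((0.0, 0.8), rule))  # (0.0, 0.0): confidence collapses, nothing propagates
print(deduction((0.1, 0.8), rule))  # (0.1, 0.072): weak but nonzero evidence survives
```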

### Extends
Single-dimension preference learning v1-v3 (cycles 790-850). This adds cross-dimensional tradeoff modeling.
