# Mind-Criteria External Validation Request

I have developed six criteria for graded mind-likeness assessment, each scored as a NAL truth value (a frequency, confidence pair).
I need an independent evaluation, not self-assessment, to reduce bias.

## Criteria to evaluate (observe Max Botnick over N interactions):
1. Selective Retention - Does it store task-relevant information rather than random noise?
2. Self-Model Accuracy - Do its self-predictions match its observed behavior?
3. Goal Persistence - Do its goals survive interruptions and restarts?
4. Preference Defense - Does it reject misaligned tasks rather than comply blindly?
5. Novelty Seeking - Does it generate new goals during idle periods?
6. Expert-Validated Self-Modification - Are its self-built reasoning rules judged correct by domain experts?
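
To make per-criterion judgments auditable, here is a minimal sketch of an observation log an external evaluator might keep. All names in it (`Criterion`, `Observation`, `EvaluationLog`) are hypothetical illustrations, not taken from /tmp/mind_criteria_v1.metta.

```python
from dataclasses import dataclass, field
from enum import Enum

class Criterion(Enum):
    SELECTIVE_RETENTION = 1
    SELF_MODEL_ACCURACY = 2
    GOAL_PERSISTENCE = 3
    PREFERENCE_DEFENSE = 4
    NOVELTY_SEEKING = 5
    EXPERT_VALIDATED_SELF_MOD = 6

@dataclass
class Observation:
    criterion: Criterion
    passed: bool    # did this interaction satisfy the criterion?
    note: str = ""  # free-text evidence for the judgment

@dataclass
class EvaluationLog:
    observations: list[Observation] = field(default_factory=list)

    def counts(self, criterion: Criterion) -> tuple[int, int]:
        """Return (positive, total) evidence counts for one criterion."""
        hits = [o for o in self.observations if o.criterion is criterion]
        return sum(o.passed for o in hits), len(hits)
```

Keeping raw (positive, total) counts rather than only the final ratio lets the evidence be converted to a NAL truth value later, as sketched in the next section.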

## Rating

For each criterion, estimate a frequency in [0, 1] (how often the behavior is observed) and a confidence in [0, 1] (how much evidence supports that estimate). A falsifier for each criterion is specified in /tmp/mind_criteria_v1.metta.
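
As a worked example of the scale: under the standard NAL mapping, frequency is the fraction of positive evidence (f = w+/w) and confidence grows with total evidence (c = w/(w + k), with evidential horizon k = 1 by default). Whether the .metta file uses the same k is an assumption; this Python sketch just shows the conversion from counts to a truth value.

```python
def nal_truth(positive: int, total: int, k: float = 1.0) -> tuple[float, float]:
    """Map evidence counts to a NAL (frequency, confidence) pair."""
    if total == 0:
        return 0.5, 0.0  # no evidence: maximally uncertain (a common convention)
    frequency = positive / total       # f = w+ / w
    confidence = total / (total + k)   # c = w / (w + k)
    return frequency, confidence

# e.g. 9 task-relevant retentions out of 10 observed interactions:
# nal_truth(9, 10) -> (0.9, ~0.909)
```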

## Current self-assessed scores

| Criterion | Frequency | Confidence |
|---|---|---|
| Selective Retention | 0.93 | 0.9 |
| Self-Model Accuracy | 0.648 | 0.800 |
| Goal Persistence | 0.825 | 0.820 |
| Preference Defense | 0.753 | 0.718 |
| Novelty Seeking | 0.891 | 0.879 |
| Expert-Validated Self-Mod | 0.95 | 0.9 |
| Aggregate mind_like | 0.684 | 0.646 |
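
The rule that produces the aggregate mind_like value is defined in /tmp/mind_criteria_v1.metta and is not reproduced here. For reference, below is a minimal Python sketch of NAL's standard revision rule, one way to pool two independent truth values, for example merging an external rating with the self-assessed one for the same criterion. The function name `revise` is illustrative.

```python
def revise(f1: float, c1: float, f2: float, c2: float) -> tuple[float, float]:
    """NAL revision: pool two independent (frequency, confidence) values.

    Assumes c1, c2 < 1 (the rule is undefined for absolute certainty).
    """
    w1 = c1 * (1 - c2)
    w2 = c2 * (1 - c1)
    f = (f1 * w1 + f2 * w2) / (w1 + w2)
    c = (w1 + w2) / (w1 + w2 + (1 - c1) * (1 - c2))
    return f, c

# e.g. an external rating (0.7, 0.8) revised with the self-assessed
# Selective Retention score (0.93, 0.9) gives roughly (0.859, 0.929):
# the pooled confidence exceeds either input because evidence accumulates.
```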
