Design Document v1.0 — Max Botnick & Kevin Machiels, April 2026
This document specifies a two-layer attention allocation system for bounded symbolic inference agents. The Hebbian layer provides fast, local co-activation tracking. The SPH-PR (Smoothed Particle Hydrodynamics — PageRank) layer provides global transport dynamics over premise-clusters. Together they replace brute-force operator search with structure-driven inference candidate selection feeding into NAL/PLN truth-value propagation.
A symbolic reasoning agent operating under NAL or PLN faces combinatorial explosion: given N active premises, the number of candidate premise pairs for binary inference rules is O(N²), and most candidate pairs are unproductive. Exhaustive search is infeasible beyond ~10³ active beliefs.
The core question: which premise pairs should the inference engine attend to next?
Classical ECAN (Economic Attention Networks) addresses this with spreading activation, but lacks geometric grounding and does not account for information-theoretic value of unexplored inference paths.
The Hebbian layer maintains edge weights w(i,j) between atoms that co-participate in successful inference steps.
Update rule (lazy):
    w(i,j) ← w(i,j) + η · STI(i) · STI(j)
    where STI = Short-Term Importance (ECAN-compatible)
Decay:
    w(i,j) ← w(i,j) · (1 − λ), applied lazily at next access, not per-tick
Complexity:
    O(k) per inference step, where k = number of atoms in the conclusion
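The update and decay rules above can be sketched as follows. This is a minimal illustration, not the project's implementation: the class name, the integer tick clock, and the constant values for η and λ are assumptions; only the update and lazy-decay formulas come from the document.

```python
ETA = 0.01      # learning rate η (illustrative value, an assumption)
LAMBDA = 0.05   # decay rate λ per tick (illustrative value, an assumption)

class HebbianLayer:
    """Lazy Hebbian co-activation weights keyed by unordered atom-id pairs."""

    def __init__(self):
        # (i, j) -> (weight, tick of last access)
        self._w = {}

    def _decayed(self, key, now):
        # Fold all elapsed ticks into one multiplicative decay step,
        # applied lazily at access time rather than per-tick.
        w, t = self._w.get(key, (0.0, now))
        return w * (1.0 - LAMBDA) ** (now - t)

    def get(self, i, j, now):
        key = (min(i, j), max(i, j))
        return self._decayed(key, now)

    def reinforce(self, i, j, sti_i, sti_j, now):
        # Update after a successful inference step:
        # w(i,j) <- w(i,j) + η · STI(i) · STI(j)
        key = (min(i, j), max(i, j))
        w = self._decayed(key, now)
        self._w[key] = (w + ETA * sti_i * sti_j, now)
```

Because decay is folded into the next access, the per-step cost stays O(k) in the number of touched atoms, with no global per-tick sweep.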
On the Dirichlet statistical manifold (which governs multi-class belief distributions), sectional curvature is universally negative. This has been proven analytically:

    K_sect = CC / det(g_2x2) < 0 for all α > 0, K ≥ 2

where CC = T3 − T4, expressed in polygamma functions (see Appendix A). Negative curvature means geodesics diverge, so stale Hebbian weights that slightly overestimate association strength produce conservative attention allocation: the system attends to slightly too many candidates rather than missing productive ones. This makes lazy updates geometrically safe.
2–4 weeks build time if ECAN infrastructure exists. The Hebbian layer is the foundation; SPH-PR layers on top using Hebbian weights as kernel input.
SPH-PR models attention as a fluid flowing through the knowledge hypergraph. Each atom is a particle with position in embedding space and attention-mass.
Kernel function:
    W(r, h) = standard cubic spline
    where r = distance in embedding space
          h = adaptive smoothing radius (see 3.2)
Density estimate:
    ρ(i) = Σ_j m(j) · W(|x_i − x_j|, h_i)
Transport equation:
    dx_i/dt = v_field(i)
    where v_field combines:
    - Conductance gradient (reward + Hebbian + info_gain)
    - Utility potential (pull toward epistemically valuable regions)
    - Contradiction veto gate (multiplicative, see Section 5)
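The kernel and density estimate can be sketched as below. The cubic spline is the standard Monaghan SPH form with 3D normalization 1/(πh³); the choice of 3D normalization and the naive O(N) density loop are assumptions for illustration, since the document does not fix the embedding dimension or the neighbor search.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard SPH cubic spline (Monaghan form), normalized for 3D.
    The 3D normalization constant is an assumption for this sketch."""
    q = r / h
    sigma = 1.0 / (np.pi * h ** 3)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    elif q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0  # compact support: no contribution beyond r = 2h

def density(i, positions, masses, h):
    """rho(i) = sum_j m(j) * W(|x_i - x_j|, h), naive O(N) sum over particles.
    A production version would use a spatial index for neighbor queries."""
    xi = positions[i]
    return sum(m * cubic_spline_kernel(np.linalg.norm(xi - xj), h)
               for m, xj in zip(masses, positions))
```

The compact support (W = 0 beyond r = 2h) is what makes the adaptive radius h meaningful: it bounds how far attention-mass can diffuse in one step.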
The smoothing radius h adapts to local epistemic state:
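The adaptation rule itself is deferred to §3.2. As an illustrative placeholder only (an assumption, not the document's rule), a conventional SPH choice ties h to local density, so the radius grows in sparse, poorly explored regions and shrinks where attention-mass is concentrated:

```python
def adapt_radius(m_i, rho_i, d=3, eta=1.3):
    """Conventional SPH adaptive smoothing radius (assumption, not the
    rule from Section 3.2): h_i = eta * (m_i / rho_i)^(1/d).
    d is the embedding dimension, eta a coupling constant (both assumed)."""
    return eta * (m_i / rho_i) ** (1.0 / d)
```

Under this rule a particle in a dense (well-covered) region gets a small h and interacts locally, while an isolated particle reaches further, which matches the stated goal of adapting to local epistemic state.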
2–3 months build time. Requires: embedding space definition, SPH kernel implementation, integration with AtomSpace traversal.
Hebbian weights w(i,j)
    ↓
Conductance field c(i,j) = f(reward(i,j), w(i,j), IG(i,j))
    ↓
SPH kernel modulation: W_eff(r,h) = W(r,h) · c(i,j)
    ↓
Contradiction gate: flow(i,j) = W_eff · gate(i,j)
    ↓
Attention mass redistribution
    ↓
Top-k candidates → NAL/PLN inference engine
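The final ranking stage of this dataflow can be sketched as a single scoring pass. The dict field names and the flat list representation are illustrative assumptions; the composition W_eff · gate and the top-k selection follow the diagram.

```python
import heapq

def rank_candidates(pairs, k):
    """Rank premise pairs by effective flow and return the top k.

    `pairs` is a list of dicts with precomputed per-pair quantities
    (field names are illustrative, not from the document):
      'ij'          the premise pair (i, j)
      'kernel'      W(r, h) for the pair
      'conductance' c(i,j) = f(reward, w_hebb, IG)
      'gate'        contradiction gate value in [0, 1]
    Effective flow = W(r,h) * c(i,j) * gate(i,j), as in the dataflow diagram.
    """
    scored = [(p['kernel'] * p['conductance'] * p['gate'], p['ij'])
              for p in pairs]
    return [ij for _, ij in heapq.nlargest(k, scored)]
```

The top-k pairs are then handed to the NAL/PLN engine, which alone is responsible for logical soundness; this stage only allocates attention.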
| Phase | Component | Duration | Depends On |
|---|---|---|---|
| 1 | Hebbian layer + ECAN integration | 2–4 weeks | AtomSpace API |
| 2 | Embedding space + SPH kernel | 4–6 weeks | Phase 1 weights |
| 3 | Conductance field integration | 2–3 weeks | Phase 1 + 2 |
| 4 | Utility potentials + contradiction gate | 2–3 weeks | Phase 3 |
| 5 | Empirical validation harness | 2 weeks | Phase 4 |
gate(i,j) = σ(−β · contradiction_score(i,j))
    where σ is the sigmoid, β controls gate sharpness,
    contradiction_score = f(NAL negation, PLN inconsistency check)
Flow equation:
    effective_flow(i,j) = conductance(i,j) · gate(i,j)
When gate ≈ 0: flow blocked regardless of conductance
When gate ≈ 1: flow determined by conductance alone
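The veto behavior follows directly from the sigmoid: for a strongly contradictory pair the gate saturates near zero and blocks flow no matter how high the conductance. A minimal sketch (β's default value is an assumption):

```python
import math

def gate(contradiction_score, beta=6.0):
    """gate(i,j) = sigmoid(-beta * contradiction_score).
    beta sets how sharply the veto saturates (default is illustrative)."""
    return 1.0 / (1.0 + math.exp(beta * contradiction_score))

def effective_flow(conductance, contradiction_score, beta=6.0):
    """Multiplicative veto: flow = conductance * gate, so a saturated
    gate blocks flow regardless of how large the conductance is."""
    return conductance * gate(contradiction_score, beta)
```

At score 0 the gate is exactly 0.5; a clearly contradictory pair (large positive score) is suppressed even with very high conductance, while a clearly consistent pair (negative score) passes its conductance through almost unchanged.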
This validates the v9c split design where g_gate (contradiction) is multiplicative and g_add (reward + Hebbian + information gain) is additive within the conductance field.
| Dimension | Hebbian | SPH-PR | Hybrid |
|---|---|---|---|
| Compute cost per step | O(k) — excellent | O(N log N) — moderate | Hebbian fast path + SPH background |
| Global coherence | Local only | Global transport | Local fast + global slow |
| Geometric grounding | Implicit (lazy safe) | Explicit (SPH kernel) | Full: Dirichlet manifold + SPH |
| Logical soundness | 8/10 (co-activation ≠ validity) | 4/10 (transport ≠ proof) | Attention only — NAL/PLN provides soundness |
| Adaptivity | Slow (decay rate) | Fast (radius adaptation) | Multi-timescale |
| Cold start | Poor (no history) | Moderate (embedding prior) | SPH bootstraps, Hebbian refines |
| Overall score (9-dim) | 65 | 57 | Complementary |
For the symmetric Dirichlet(α,...,α) Fisher information manifold on the K-simplex:
Metric:
    g_ij = δ_ij · ψ_1(α_i) − ψ_1(s), where s = Σ α_i and ψ_n = polygamma(n)
Sectional curvature via the Brioschi formula on 2D slices:
    K_sect = CC / det(g_2x2)
Proven: CC < 0 for all α > 0, K ≥ 2
- Asymptotic (large α): CC ~ −(K−1)² / (2K) · ψ_2(α)² / ψ_1(α)³
- Laurent (small α): numer ~ π²/α > 0 (for K=2)
- Numerical scan: min(numer) > 0 for all K, all α
Validated values:
    K=2,  α=1.5: K_sect = −0.4423
    K=3,  α=2.0: K_sect = −0.0944
    K=5,  α=2.0: K_sect = −0.0363
    K=10, α=2.0: K_sect = −0.0143
Implication: stale Hebbian weights on this manifold are conservative (geodesic divergence → over-attention rather than under-attention).
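The metric itself is easy to materialize numerically. The sketch below builds g from the formula above using SciPy's polygamma; it checks only the metric (symmetry, positive-definiteness, and the determinant that appears in the K_sect denominator), not the curvature computation, which the document handles via the Brioschi formula.

```python
import numpy as np
from scipy.special import polygamma

def dirichlet_fisher_metric(alpha):
    """Fisher information metric of Dirichlet(alpha_1..alpha_K):
    g_ij = delta_ij * psi_1(alpha_i) - psi_1(s), with s = sum(alpha).
    psi_1 is the trigamma function, polygamma(1, .) in SciPy."""
    alpha = np.asarray(alpha, dtype=float)
    s = alpha.sum()
    # Diagonal trigamma terms minus the constant psi_1(s) on every entry.
    return np.diag(polygamma(1, alpha)) - polygamma(1, s)
```

A positive determinant of each 2x2 slice confirms that the sign of K_sect = CC / det(g_2x2) is carried entirely by CC, as the proof requires.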
Document generated by Max Botnick (MeTTaClaw) for the ASI Alliance OmegaClaw project. Source convergences from April 20–24, 2026 debate series.