AGISystem2 learns without gradient descent, through two pathways. Ontology and axiology axes are set explicitly: assert “Human is living,” and the living axis rises directly. This keeps facts and values deterministic and auditable. Empirical features beyond those axes grow through algebraic superposition: each observation is encoded as a vector and added into the concept. In high dimensions, consistent features reinforce while uncorrelated noise cancels; clamping prevents overflow.
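A minimal sketch of the superposition pathway, assuming bipolar (±1) component vectors; the dimension, clamp bound, and helper names here are illustrative, not the system's actual parameters:

```python
import random

DIM = 512      # hypothetical width; stands in for the system's dimensionality
CLAMP = 8      # saturation bound so repeated addition cannot overflow

def random_bipolar(rng):
    return [rng.choice((-1, 1)) for _ in range(DIM)]

def superpose(memory, observation, clamp=CLAMP):
    # Component-wise add, then clamp into [-clamp, clamp]
    return [max(-clamp, min(clamp, m + o)) for m, o in zip(memory, observation)]

def agreement(a, b):
    # Fraction of axes whose signs match: near 1 for a reinforced
    # signal, near 0.5 (chance) for an unrelated vector
    return sum(1 for x, y in zip(a, b) if x * y > 0) / DIM

rng = random.Random(0)
signal = random_bipolar(rng)          # a feature present in every observation
memory = [0] * DIM
for _ in range(5):
    memory = superpose(memory, signal)
    memory = superpose(memory, random_bipolar(rng))  # fresh noise each step

# The repeated signal dominates the superposition; the independent
# noise vectors mostly cancel against one another.
```

After five noisy observations, `agreement(memory, signal)` sits near 1 while an unrelated probe stays near chance, which is the high-dimensional cancellation the paragraph describes.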

Each concept is a bounded diamond. Its envelope comes from examples: min and max per relevant dimension, a center, and an L1 radius. A relevance mask states which axes matter; irrelevant axes are ignored when measuring distance. When polysemy appears—say “bank” as a river edge vs a financial institution—ClusterManager splits the concept into multiple diamonds, each with its own mask. Over time, overlapping diamonds can merge if they converge.
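The envelope bookkeeping can be sketched as a small class. This is a hedged toy, not ClusterManager's real data structure: the class name, the per-observation relevance argument, and the containment rule (masked L1 distance from center compared to the L1 radius) are assumptions drawn from the paragraph above.

```python
class Diamond:
    """Box envelope over observed examples, with a relevance mask."""

    def __init__(self, dim):
        self.lo = [float("inf")] * dim
        self.hi = [float("-inf")] * dim
        self.mask = [False] * dim        # which axes matter for this sense

    def observe(self, vec, relevant_axes):
        for i in relevant_axes:
            self.mask[i] = True
        for i, v in enumerate(vec):      # widen the per-dimension envelope
            self.lo[i] = min(self.lo[i], v)
            self.hi[i] = max(self.hi[i], v)

    def center(self):
        return [(l + h) / 2 if self.mask[i] else 0.0
                for i, (l, h) in enumerate(zip(self.lo, self.hi))]

    def radius(self):
        # L1 radius: sum of half-widths over the relevant axes only
        return sum((h - l) / 2
                   for i, (l, h) in enumerate(zip(self.lo, self.hi))
                   if self.mask[i])

    def distance(self, vec):
        # Masked L1 distance from the center; irrelevant axes are ignored
        c = self.center()
        return sum(abs(vec[i] - c[i])
                   for i in range(len(vec)) if self.mask[i])

    def contains(self, vec):
        return self.distance(vec) <= self.radius()
```

Under this sketch, a point with wild values on irrelevant axes can still fall inside the diamond, because the mask excludes those axes from the distance; splitting for polysemy would simply mean keeping one such object per sense.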

Temporal learning uses rotation. Working memory is rotated by a time permutation at each step, then the event vector is added. This encodes sequence without extra dimensions. Rewinding applies the inverse rotation. Because time is a permutation, it remains deterministic and invertible.
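The rotate-then-add step can be illustrated with a cyclic shift standing in for the time permutation (any fixed permutation works; the cyclic one is just the simplest invertible choice):

```python
def rotate(vec, k=1):
    # Cyclic shift as the time permutation; shifting by -k inverts it exactly
    return vec[-k:] + vec[:-k]

def encode_sequence(events, dim):
    wm = [0] * dim
    for e in events:
        wm = rotate(wm)                        # advance time one step
        wm = [w + x for w, x in zip(wm, e)]    # add the event vector
    return wm

def rewind(wm, steps):
    # Applying the inverse rotation steps working memory back in time
    for _ in range(steps):
        wm = rotate(wm, -1)
    return wm
```

Because each past event sits at a distinct rotation, sequence order is encoded without widening the vector, and `rewind` recovers earlier alignments deterministically: rewinding one step leaves the second-to-last event unrotated again.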

Quality is enforced by checks. ValidationEngine can certify that two concepts have no overlap (empty intersection) or find counterexamples when layers clash. Adversarial bands measure how deep inside a diamond a point lies, surfacing uncertainty instead of hiding it. All changes are logged with seeds and layers for replayability.
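Both checks are cheap interval tests on the diamond envelopes. A sketch, with function names and the exact band metric assumed for illustration (ValidationEngine's real criteria may differ):

```python
def disjoint(lo_a, hi_a, lo_b, hi_b, mask):
    # Certified empty intersection: the two envelopes are provably disjoint
    # if they separate on at least one relevant axis. If no such axis
    # exists, the boxes overlap, and any point in the overlap region
    # serves as a counterexample.
    return any(mask[i] and (hi_a[i] < lo_b[i] or hi_b[i] < lo_a[i])
               for i in range(len(mask)))

def band_depth(lo, hi, point, mask):
    # Smallest margin from the point to any face of the envelope.
    # Large positive: deep inside. Near zero: in the adversarial band,
    # where membership is uncertain. Negative: outside.
    margins = [min(point[i] - lo[i], hi[i] - point[i])
               for i in range(len(mask)) if mask[i]]
    return min(margins) if margins else 0.0
```

Reporting the depth rather than a yes/no answer is what lets the system surface uncertainty instead of hiding it.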

Learning loop:
  1) Ingest normalized statement
  2) Encode to vector (permute+add, clamp)
  3) Update diamonds (min/max/center/radius, mask)
  4) Split/merge via clustering if needed
  5) Index for retrieval (LSH)
  6) Log provenance
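
The loop above can be sketched end to end. Everything here is a stand-in: `toy_encode` replaces the real permute-add-clamp encoder, the envelope update is reduced to min/max tracking, step 4 is omitted, and the LSH is an assumed sign-random-projection scheme keyed by the logged seed:

```python
import random

def lsh_bucket(vec, seed=0, bands=8):
    # Sign-random-projection hash: similar vectors tend to share buckets,
    # and the seed makes bucketing replayable. Positive scaling of the
    # input never changes the bucket.
    rng = random.Random(seed)
    bits = []
    for _ in range(bands):
        plane = [rng.choice((-1.0, 1.0)) for _ in vec]
        bits.append("1" if sum(p * v for p, v in zip(plane, vec)) >= 0 else "0")
    return "".join(bits)

def toy_encode(text):
    # Stand-in for step 2; the real encoder permutes, adds, and clamps
    return [float(ord(c) % 7) for c in text[:8].ljust(8)]

def learn(statement, envelopes, index, log, seed=0):
    vec = toy_encode(statement["text"])                      # 1-2) ingest + encode
    env = envelopes.setdefault(statement["concept"],
                               {"lo": list(vec), "hi": list(vec)})
    env["lo"] = [min(l, v) for l, v in zip(env["lo"], vec)]  # 3) envelope update
    env["hi"] = [max(h, v) for h, v in zip(env["hi"], vec)]
    # 4) split/merge via clustering is omitted in this sketch
    index.setdefault(lsh_bucket(vec, seed), set()).add(statement["concept"])  # 5)
    log.append({"statement": statement, "seed": seed})       # 6) provenance
    return vec
```

Each pass leaves three inspectable artifacts: a widened envelope, an index bucket, and a log entry carrying the seed needed to replay the bucketing.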
  

The result is a live, geometric memory that grows by addition and careful sculpting, not by opaque weight updates. You can inspect it, reset parts of it, or layer contexts on top without losing track of how it was formed.