Explainability in AGISystem2 is not an afterthought or a separate tool; it is a direct consequence of using explicit geometry and deterministic algorithms. Every fact is a point in a conceptual space. Every concept is a region defined by bounded diamonds and relevance masks. Every reasoning step is a distance check, a permutation, or a layer overlay that can be replayed exactly. This chapter explains how those design choices translate into concrete explanations that auditors and engineers can follow without needing a background in machine learning theory.
Whenever the Reasoner answers a query, it does more than return True, Plausible, or False. It also returns a provenance object that captures the internal state of the reasoning pipeline. At a minimum this includes the list of active theory layers, the relevance mask used for distance computations, the measured masked L1 distance to the relevant concept centre, and the sceptic and optimist radii that defined the adversarial bands for this decision. If a theory layer overrode specific dimensions of a concept, those overrides are recorded. If the BiasController applied a special mode, the identity of that mode and the partitions it masked are logged as well.
Because TranslatorBridge and Retriever are themselves deterministic modules whose configurations are part of the EngineAPI, provenance also references their versions and seeds. A provenance entry notes which translator profile normalised the input sentence, which hash family and banding parameters were used for LSH indexing, and which candidate concepts were retrieved before exact distance checks. In other words, an explanation is not just "this point was inside that region" but "under configuration C, with layers L, and mask M, the query vector fell at distance D relative to radii (Rs, Ro) after we retrieved candidates using this index". The specifications for Reasoner, ValidationEngine, BiasController, and AuditLog in the .specs tree describe these structures more formally.
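Determinism at retrieval time is what makes this replayability possible. The sketch below illustrates the idea with a seeded p-stable LSH family; the function name, parameters, and the choice of a Cauchy distribution (the L1-stable case) are illustrative assumptions, not the Retriever's actual implementation:

```python
import numpy as np

def lsh_hashes(vec, seed=42, n_hashes=32, width=8.0):
    """Illustrative p-stable LSH for L1 distance. With a fixed seed, the
    projection directions and offsets are reproducible, so the same query
    vector lands in the same buckets on any machine, at any later time.
    Parameter names mirror the provenance fields (seed, hashes, width)."""
    rng = np.random.default_rng(seed)
    dim = len(vec)
    a = rng.standard_cauchy((n_hashes, dim))  # Cauchy is 1-stable (L1 case)
    b = rng.uniform(0.0, width, n_hashes)     # random offsets in [0, width)
    return np.floor((a @ np.asarray(vec) + b) / width).astype(int)

v = [1.0, 2.0, 3.0, 4.0]
run1 = lsh_hashes(v)
run2 = lsh_hashes(v)          # simulates a replay on a different machine
assert (run1 == run2).all()   # identical buckets, hence identical candidates
```

Because the seed and banding parameters are part of the provenance record, an auditor can re-derive exactly which candidate concepts were fetched before the exact distance checks ran.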
The provenance object can be imagined as a block that sits alongside the usual reasoning pipeline. As the query moves from layers through geometric checks to banded decisions, each module contributes its own fields: TheoryStack adds the active layers, MathEngine and Retriever add distances and index parameters, BiasController adds information about masks and modes, and AuditLog persists the assembled record. Explanations later read from this block rather than trying to reconstruct behaviour from scratch.
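The assembled block can be pictured as a plain record. The following dataclass is a minimal sketch of its shape; the field names are illustrative rather than the engine's actual schema, which is defined formally in the .specs tree:

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    # Hypothetical provenance shape; each field is filled in by one module
    # as the query moves through the pipeline.
    query: str
    layers: list                 # active theory layers, in overlay order (TheoryStack)
    mask: frozenset              # dimension names whose mask bit is 1 (BiasController)
    distance: float              # masked L1 distance to the concept centre (MathEngine)
    sceptic_radius: float        # inner adversarial band
    optimist_radius: float       # outer adversarial band
    result: str                  # "True", "Plausible", or "False"
    translator_profile: str = ""            # TranslatorBridge version
    index_params: dict = field(default_factory=dict)  # Retriever / LSH settings

record = ProvenanceRecord(
    query="Is water boiling at 50?",
    layers=["base"],
    mask=frozenset({"Temperature", "Physicality"}),
    distance=22.0,
    sceptic_radius=15.0,
    optimist_radius=20.0,
    result="False",
    translator_profile="v1.2",
    index_params={"family": "lsh_pstable", "seed": 42},
)
```

AuditLog would persist such a record as-is; later explanation layers read from it rather than re-deriving the decision.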
One advantage of using a structured conceptual space is that each dimension or group of dimensions has a clear semantic role. Ontology dimensions describe physical, temporal, legal, and process properties. Axiology dimensions describe values and norms. Empirical dimensions hold learned nuances but still belong to a known index range. Relevance masks exploit this structure by making it explicit which dimensions participated in a decision. When a masked L1 distance is computed, dimensions whose mask bits are zero are simply ignored; they do not contribute to the distance or to the band decision.
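The masked distance itself is a one-line computation. The sketch below shows the rule the text describes, with toy numbers in abstract dimension units:

```python
def masked_l1(point, centre, mask):
    """Masked L1 distance: dimensions whose mask bit is 0 are ignored
    entirely, so they contribute nothing to the distance or, downstream,
    to the band decision."""
    return sum(abs(p - c) for p, c, m in zip(point, centre, mask) if m)

# Toy 4-dimensional example: only dimensions 0 and 2 are relevant.
d = masked_l1([50, 7, 3, 9], [100, 7, 1, 0], [1, 0, 1, 0])
# |50 - 100| + |3 - 1| = 52; dimensions 1 and 3 are simply absent.
```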
This means that explanations can name the axes that mattered. A provenance record can state that a boiling-point query depended primarily on Temperature and Physicality axes in the ontology range, or that a loan-approval decision was influenced by specific risk and compliance dimensions but not by protected attributes. When the Bias & Values chapter talks about turning bias into a first-class object, it relies on this mask-based attribution: it is possible to ask, in a precise way, whether a result would change if certain axes were masked, and to interpret differences as effects of those axes.
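Mask-based attribution can be sketched as an ablation: grade the query with the full mask, then again with one axis masked out, and interpret any change in the graded result as an effect of that axis. Function names and numbers here are illustrative assumptions, not the engine's API:

```python
def grade(distance, sceptic_r, optimist_r):
    # Banded decision: inside the sceptic radius is True, inside the
    # optimist radius is Plausible, outside both is False.
    if distance <= sceptic_r:
        return "True"
    if distance <= optimist_r:
        return "Plausible"
    return "False"

def axis_effect(point, centre, mask, axis, sceptic_r, optimist_r):
    """Does the graded result change when `axis` is masked out?"""
    def dist(m):
        return sum(abs(p - c) for p, c, b in zip(point, centre, m) if b)
    with_axis = grade(dist(mask), sceptic_r, optimist_r)
    ablated = list(mask)
    ablated[axis] = 0
    without_axis = grade(dist(ablated), sceptic_r, optimist_r)
    return with_axis, without_axis

# Axis 0 dominates the distance: masking it flips the verdict.
before, after = axis_effect([50, 7, 3], [100, 7, 1], [1, 0, 1],
                            axis=0, sceptic_r=15, optimist_r=20)
# before == "False", after == "True"
```

If masking a protected attribute leaves the result unchanged, the record supports the claim that the attribute played no role in the decision.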
ValidationEngine provides a "dry run" mode that explores consequences and checks consistency without mutating stored data. Given a set of concepts, layers, and masks, it can compute whether two concept regions intersect, whether a point must belong to both, or whether there exists a counterexample. These operations are implemented using the same geometric primitives as everyday reasoning—distance checks, box intersections, and mask combinations—but they are applied in a way that never writes back to ConceptStore or changes diamonds.
From an explainability perspective, ValidationEngine acts as a proof generator. It can report that two concepts are disjoint because their boxes do not overlap on certain key dimensions, or that a particular scenario leads to a conflict between layers because one layer expands a region that another layer marks as forbidden. Internal validation scenarios illustrate how these checks are used both to sanity-check new theories and to guard against misconfiguration. When an auditor asks "How do you know these two rules cannot fire together?", the system can respond with a geometric proof derived from ValidationEngine’s analysis.
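A disjointness proof of this kind reduces to interval arithmetic. The sketch below checks axis-aligned boxes and returns the witnessing dimension; concept names and ranges are toy values, and the real ValidationEngine also handles diamond-shaped regions and layer overlays:

```python
def boxes_disjoint(box_a, box_b, mask):
    """Dry-run disjointness check over axis-aligned boxes.
    Each box maps dimension name -> (low, high). Returns the first masked
    dimension on which the intervals fail to overlap -- the geometric
    'proof' that no point can lie in both regions -- or None."""
    for dim in mask:
        lo_a, hi_a = box_a[dim]
        lo_b, hi_b = box_b[dim]
        if hi_a < lo_b or hi_b < lo_a:
            return dim   # witness: the intervals cannot meet on this axis
    return None          # boxes overlap on every masked dimension

# Toy concepts on a Temperature axis (abstract units).
steam = {"Temperature": (100, 500), "Physicality": (1, 1)}
ice = {"Temperature": (-100, 0), "Physicality": (1, 1)}
witness = boxes_disjoint(steam, ice, ["Temperature", "Physicality"])
# witness == "Temperature": the regions are provably disjoint on that axis.
```

The returned dimension is exactly the kind of fact an auditor's question demands: not "the model said so" but "these two regions cannot intersect because their Temperature intervals do not overlap".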
To make this less abstract, consider a simple counterfactual query: "Is water boiling at 50 degrees if we imagine a low-pressure environment?" The engine first normalises the question into its grammar, encodes it into a vector, and then applies a temporary theory layer representing the low-pressure condition. Under the base theory, water boiling at 50 degrees may fall outside the relevant diamond and be graded as False. Under the counterfactual layer, the same point may move inside the diamond and be graded as True. The provenance attached to each result records this contrast in detail.
Example provenance (simplified):

```text
query:      "Is water boiling at 50?"
layers:     [base]  or  [base, Physics_Test(counterfactual)]
mask:       {Temperature, Physicality}
distance:   22 (L1, base), 12 (L1, counterfactual)
radii:      sceptic=15, optimist=20
result:     False (base), True (counterfactual)
translator: profile=v1.2, prompt_hash=abc123
index:      lsh_pstable, seed=42, hashes=32, bands=8, width=8
```
An auditor reading this record can see exactly which dimensions were active, which layers were applied, how close the query lay to the concept centre, and which configuration choices shaped retrieval. No part of the chain depends on opaque randomness at inference time. If needed, the same query can be replayed later on a different machine, producing the same vectors, candidates, and distances.
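The counterfactual contrast in the record above can be reproduced with a few lines. The sketch models a theory layer as a sparse per-dimension override of the concept centre; the centre values (72 and 62, in abstract dimension units) are toy numbers chosen to reproduce the distances 22 and 12, not physical constants:

```python
def grade(distance, sceptic_r, optimist_r):
    # Banded decision over the adversarial radii.
    if distance <= sceptic_r:
        return "True"
    if distance <= optimist_r:
        return "Plausible"
    return "False"

def overlay(base_centre, layer_overrides):
    """A theory layer as a sparse override of the base concept centre."""
    centre = dict(base_centre)
    centre.update(layer_overrides)
    return centre

def masked_l1(point, centre, mask):
    return sum(abs(point[d] - centre[d]) for d in mask)

mask = {"Temperature", "Physicality"}
query = {"Temperature": 50, "Physicality": 1}
base = {"Temperature": 72, "Physicality": 1}   # base-layer boiling centre (toy)
low_pressure = {"Temperature": 62}             # Physics_Test override (toy)

d_base = masked_l1(query, base, mask)                       # 22 -> beyond both radii
d_cf = masked_l1(query, overlay(base, low_pressure), mask)  # 12 -> inside sceptic radius
verdicts = (grade(d_base, 15, 20), grade(d_cf, 15, 20))
# verdicts == ("False", "True")
```

Every number in the verdict pair traces back to a stored centre, override, mask, and radius, which is precisely what makes the replay exact.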
While internal reasoning is geometric, human readers often prefer verbal or logical explanations. Because the conceptual space and dimension catalogue are explicit, it is possible to render geometric proofs into natural language narratives. For instance, a client application could translate "distance along Temperature exceeded the sceptic radius while all other relevant axes matched" into a sentence like "The temperature is too low given the physical constraints, even though other conditions are similar". The grammar and API chapters describe how EngineAPI surfaces enough metadata for such higher-level explanation layers to be built on top of the core engine.
This separation of concerns is intentional: AGISystem2 provides precise, deterministic geometric explanations; applications built on top can decide how to phrase these explanations for end users. The important point is that every narrative can be traced back to numeric facts about vectors, masks, and layers that the engine stores and exposes. There is no stage at which an opaque neural network is trusted without an accompanying geometric account.
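A client-side narrative layer of the kind described above can be as simple as a template over provenance fields. This is a toy renderer with hypothetical field names, not part of the engine; the point is only that every clause maps to a stored numeric fact:

```python
def explain(record):
    """Render a provenance dict into an English sentence. A real client
    would use richer templates, but each clause here is backed by a
    number the engine stores and exposes."""
    axis = record["dominant_axis"]
    dist = record["distance"]
    rs = record["sceptic_radius"]
    if dist > rs:
        return (f"Rejected: distance along {axis} ({dist}) exceeded the "
                f"sceptic radius ({rs}), even though the other relevant "
                f"axes matched.")
    return (f"Accepted: all relevant axes, including {axis}, fell within "
            f"the sceptic radius ({rs}).")

explain({"dominant_axis": "Temperature", "distance": 22, "sceptic_radius": 15})
# -> "Rejected: distance along Temperature (22) exceeded the sceptic
#     radius (15), even though the other relevant axes matched."
```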
Explainability threads through the entire architecture. TranslatorBridge and Parser ensure that normalisation from natural language to constrained grammar is deterministic and logged. Encoder, MathEngine, and Retriever perform operations that are simple enough to describe but powerful enough to support rich reasoning. ConceptStore and TheoryStack expose the structure of knowledge and context. BiasController and ValidationEngine provide ways to explore alternative masks and policies. AuditLog persists provenance records so that, months later, an investigator can reconstruct why a particular decision was made.
Other documentation chapters give their own angle on this story. Conceptual Spaces explains the geometric substrate. The algorithm and acronym entries linked from the Quick Wiki detail specific techniques, such as locality‑sensitive hashing and permutations, that shape explainability. Bias & Values shows how value judgments are kept separate from facts. Pragmatics and RAG discuss how AGISystem2 can be paired with generative models so that free-form text generation is always anchored by deterministic checks. Taken together, these texts form a small wiki on how to build explainable systems by design rather than by retrofitting explanations onto black boxes.