Bias appears whenever judgments about what ought to happen are mixed together with descriptions of what actually is the case. Many machine learning systems entangle these two kinds of information, so it becomes difficult to tell whether an outcome was driven by facts about the world or by hidden preferences in the data and model. AGISystem2 is designed to keep facts and values apart at the level of the conceptual space itself. Ontology axes, which occupy dimensions 0 to 255, are reserved for factual and structural properties. Axiology axes, from 256 to 383, are reserved for norms, preferences, and value judgments. Dimensions above 383 hold empirical details. This partition is reflected in the dimension catalogue specification and is enforced by the configuration and bias-control modules.
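The fixed partition can be expressed as a small table of slices. The following sketch is illustrative, not the engine's actual API: the slice boundaries for ontology and axiology follow the text above, while the total dimensionality of 512 and the helper name `partition_of` are assumptions for the example.

```python
# Hypothetical partition boundaries matching the catalogue described above.
ONTOLOGY = slice(0, 256)     # dims 0-255: factual and structural properties
AXIOLOGY = slice(256, 384)   # dims 256-383: norms, preferences, value judgments
DIM = 512                    # illustrative total size; dims 384+ are empirical

def partition_of(i: int) -> str:
    """Name the partition a dimension index falls into."""
    if ONTOLOGY.start <= i < ONTOLOGY.stop:
        return "ontology"
    if AXIOLOGY.start <= i < AXIOLOGY.stop:
        return "axiology"
    return "empirical"
```

Because the boundaries are constants known to every module, any component can decide how to treat a dimension from its index alone, without consulting the stored concept.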
When the engine encodes a statement such as "This treatment reduces blood pressure" or "This action is forbidden by policy P", it places different aspects of that information into different slices of the vector. The factual content about physiology, time, and risk lives on ontology dimensions that describe physical, temporal, and process properties, as documented in the ontology dimension catalogue. The value-laden content—what is permitted, obligatory, or prohibited under a particular regime—lives on axiology dimensions. Because these partitions are fixed and known in advance, modules such as BiasController and ValidationEngine can manipulate, inspect, or mask them independently.
In practice this means that a theory layer encoding "killing is forbidden" or "this act is permitted during wartime" only adjusts the axiology dimensions associated with permissions and prohibitions. The underlying factual concept of "killing" remains the same: its geometric region in ontology space is unaffected. When a different legal or ethical layer is applied, only the axiology region changes. This separation is what enables the engine to compare value systems directly, to simulate decisions under different policies, and to answer questions like "What does this system believe about the facts regardless of which law is active?"
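A theory layer of this kind can be sketched as a delta over the axiology slice only. The function name and the plain-list representation below are illustrative assumptions; the point is that the stored concept is never mutated and its ontology slice is provably identical before and after a layer is applied.

```python
AXIOLOGY = slice(256, 384)  # value dimensions, per the fixed partition

def apply_layer(concept, layer_delta):
    """Return the concept as seen under a value layer.

    Only the axiology slice of a copy is shifted; the stored vector and
    its ontological (factual) region are left untouched.
    """
    adjusted = list(concept)  # operate on a copy, never the stored vector
    for i, d in enumerate(layer_delta):
        adjusted[AXIOLOGY.start + i] = concept[AXIOLOGY.start + i] + d
    return adjusted
```

Switching legal or ethical layers then amounts to applying a different `layer_delta`, which by construction cannot move the concept's region in ontology space.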
Picture the conceptual space as three adjacent blocks standing for the ontology, axiology, and empirical dimensions. Policy layers reshape only the axiology block, as if a dashed overlay were drawn on it, leaving the ontological description untouched. Bias modes act by masking slices of these blocks before a distance is computed, which is why audits can later point to specific partitions when explaining how values influenced an outcome.
The BiasController class is the main tool for applying bias-aware modes. It does not change stored concepts; instead it operates on copies or views. When a mode is activated it computes new relevance masks that zero out selected partitions before a reasoning step. A veil-of-ignorance mode, for example, may hide certain protected ontology dimensions such as age or sensitive health markers as well as all or part of the axiology block. The result is that subsequent distance computations and band decisions rely only on a reduced set of factual axes that are deemed fair to use.
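A veil-of-ignorance mode of this kind can be sketched as a relevance mask that zeroes out protected ontology axes and the axiology block before distances are taken. The function names, the 0/1 mask encoding, and the Euclidean distance are assumptions made for this example, not the BiasController's actual interface.

```python
AXIOLOGY = slice(256, 384)

def veil_of_ignorance_mask(dim, protected_ontology_dims, hide_axiology=True):
    """Build a relevance mask: 1.0 keeps a dimension, 0.0 hides it."""
    mask = [1.0] * dim
    for i in protected_ontology_dims:      # e.g. age or health-marker axes
        mask[i] = 0.0
    if hide_axiology:                      # hide the whole value block
        for i in range(AXIOLOGY.start, AXIOLOGY.stop):
            mask[i] = 0.0
    return mask

def masked_distance(a, b, mask):
    """Euclidean distance restricted to unmasked dimensions."""
    return sum(m * (x - y) ** 2 for x, y, m in zip(a, b, mask)) ** 0.5
```

Because the mask is applied at distance time, the stored vectors are untouched, and two runs with different masks can be compared dimension by dimension.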
Each mode is deterministic and logged in the AuditLog through the support modules in the infrastructure layer. Provenance records show which bias mode was active, which partitions were masked, and at what point in the reasoning pipeline the mask was applied. When other documentation pages mention that a decision was taken "under veil-of-ignorance" or in "bias-audit mode", they are referring to specific configurations of BiasController that alter relevance masks while leaving the underlying diamonds and stored vectors intact.
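A provenance record of the kind described above might carry three fields: the active mode, the masked partitions, and the pipeline stage where the mask was applied. The record shape and helper below are hypothetical, intended only to show how such an entry could stay deterministic and append-only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BiasAuditRecord:
    mode: str                   # e.g. "veil-of-ignorance"
    masked_partitions: tuple    # partition names hidden by the mask
    pipeline_stage: str         # where in the reasoning pipeline it applied

def log_mask(audit_log, mode, masked_partitions, stage):
    """Append an immutable record of a mask application to the audit log."""
    record = BiasAuditRecord(mode, tuple(masked_partitions), stage)
    audit_log.append(record)
    return record
```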
The ValidationEngine described in the Reasoning and Explainability chapters provides a way to probe how much values influence outcomes. Because facts and values live on known groups of dimensions, it can systematically compare runs with different masking strategies. For instance, it can take a query, run it once with all axiology dimensions active, and once with those dimensions zeroed out. If the result changes, the difference points directly to the role of values in that decision. Similarly, it can compare runs where certain protected attributes are masked to those where they are visible, looking for cases where outcomes unexpectedly depend on those attributes.
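The two-run comparison can be sketched as follows. Everything here is an illustrative stand-in for the ValidationEngine: a nearest-candidate decision under a masked Euclidean distance, run once with all dimensions active and once with the axiology block zeroed, reporting whether the winner flips.

```python
AXIOLOGY = slice(256, 384)

def masked_distance(a, b, mask):
    """Euclidean distance restricted to unmasked dimensions."""
    return sum(m * (x - y) ** 2 for x, y, m in zip(a, b, mask)) ** 0.5

def values_changed_outcome(query, candidates, axiology=AXIOLOGY):
    """True when zeroing the axiology block flips the nearest candidate."""
    dim = len(query)
    full_mask = [1.0] * dim
    factual_mask = list(full_mask)
    for i in range(axiology.start, axiology.stop):
        factual_mask[i] = 0.0  # hide all value dimensions

    def nearest(mask):
        return min(range(len(candidates)),
                   key=lambda i: masked_distance(query, candidates[i], mask))

    return nearest(full_mask) != nearest(factual_mask)
```

The same pattern generalizes to protected ontology attributes: mask one group of dimensions, rerun, and flag any decision that depends on axes it should be blind to.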
In domains such as hiring, lending, or health triage, this capability becomes a practical tool. A well-designed decision rule should remain stable under the removal of attributes that should not matter, such as protected characteristics. If validation runs reveal that masking those attributes changes the answer, the system has discovered a potential source of structural bias. Internal scenario suites simulate such cases to ensure that the engine’s separation between facts and values behaves as expected.
Deontic reasoning—reasoning about what is permitted, obligatory, or forbidden—is expressed through theory layers that primarily act on axiology dimensions. A civil-law layer might encode a relatively strict set of prohibitions and obligations, while a wartime or emergency layer encodes different tolerances. Because layers do not alter stored base concepts, switching from one layer to another is equivalent to switching value systems while keeping the same description of the world. The Reasoner then answers questions under whichever stack of layers is active.
This makes it straightforward to compare policies. You can encode two different regulatory regimes as two sets of layers, run the same factual scenario under both, and inspect where the deontic outcomes differ. Provenance will tell you which axiology dimensions changed and which layers contributed to those changes. The Pragmatics and RAG chapters discuss how to integrate such comparisons into workflows where AGISystem2 serves as a deterministic checker alongside probabilistic language models. In all cases, the underlying mechanism remains the same: values are geometry in a known partition, subject to the same deterministic reasoning machinery as factual concepts.
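A minimal sketch of such a regime comparison, under the same illustrative assumptions as the earlier examples: each regime is a stack of axiology-only deltas, and the diff reports exactly which axiology dimensions the two regimes disagree on for a given factual scenario.

```python
AXIOLOGY = slice(256, 384)

def apply_layers(concept, layers):
    """Apply a stack of axiology-only layer deltas to a copy of the concept."""
    v = list(concept)
    for delta in layers:                    # each delta spans the axiology block
        for i, d in enumerate(delta):
            v[AXIOLOGY.start + i] += d
    return v

def diff_regimes(concept, regime_a, regime_b):
    """Axiology dimensions on which two layer stacks disagree for one scenario."""
    va = apply_layers(concept, regime_a)
    vb = apply_layers(concept, regime_b)
    return [i for i in range(AXIOLOGY.start, AXIOLOGY.stop) if va[i] != vb[i]]
```

The returned indices are exactly what a provenance entry would cite when explaining why the two regimes produced different deontic outcomes for the same facts.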
The central design choice is that bias should not be an accidental byproduct of training data or an opaque property of a model. Instead it should be represented explicitly as regions and masks in axiology space, with clear links to ontology dimensions. Because AGISystem2 exposes these structures, engineers and auditors can ask concrete questions: which dimensions contributed to this classification, which value layers were active, and how would the result change under a different policy? This is very different from trying to interpret a deep neural network by inspecting millions of floating-point weights.
By fixing a partition between ontology and axiology, providing BiasController modes that operate only on masks and views, and integrating ValidationEngine for comparative runs, the system turns bias and value judgments into controllable, inspectable parts of the architecture. The documentation on Explainability describes how these decisions show up in provenance logs, and the conceptual spaces and algorithms chapters explain the underlying geometry. Together these texts form a small wiki on how to handle bias and values in a deterministic geometric reasoning engine.