All reasoning in AGISystem2 is implemented as geometric movement inside the conceptual space described in the dedicated chapter on conceptual spaces. Instead of symbolic rules firing on predicates or neural networks sampling the next token, the system manipulates points and regions in a high-dimensional int8 space. The Reasoner module orchestrates these operations, working with TheoryStack, ConceptStore, MathEngine, TemporalMemory, and BiasController. This chapter walks through the main reasoning styles—deduction, induction, abduction, analogy, counterfactuals, temporal and deontic reasoning—and explains how each becomes a particular pattern of distance checks, permutations, and layer overlays.

Deduction: Checking Whether a Point Lies Inside

Deductive reasoning in this engine answers questions of the form "Does this fact belong to this concept under the current context?" The query is first encoded into a high-dimensional vector, just like any other statement. The TheoryStack composes the base concept’s diamonds with any active theory layers, producing a runtime diamond that reflects both factual knowledge and the currently selected value system. The MathEngine then computes a masked L1 distance from the query point to the centre of that runtime diamond, after first verifying that the point lies within the diamond’s per-dimension min and max bounds; a query outside that bounding box is immediately treated as False.
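The bounds-check-then-distance step can be sketched as follows. This is a minimal illustration in Python/NumPy under assumed semantics; the helper names masked_l1 and inside_diamond are hypothetical, not the engine's actual API.

```python
import numpy as np

def masked_l1(query, centre, mask):
    """Masked L1 distance: only dimensions selected by the relevance
    mask contribute to the sum (hypothetical helper, not engine API)."""
    q = query.astype(np.int32)   # widen so the sum cannot overflow int8
    c = centre.astype(np.int32)
    return int(np.abs(q - c)[mask].sum())

def inside_diamond(query, lo, hi, centre, radius, mask):
    """Deduction step: bounding-box check first, then masked L1 radius."""
    if np.any(query[mask] < lo[mask]) or np.any(query[mask] > hi[mask]):
        return False   # outside the per-dimension bounds: immediately False
    return masked_l1(query, centre, mask) <= radius

# Tiny 4-dimensional example; the last axis is masked out as irrelevant.
lo     = np.array([-10, -10, -10, -10], dtype=np.int8)
hi     = np.array([ 10,  10,  10,  10], dtype=np.int8)
centre = np.array([  0,   0,   0,   0], dtype=np.int8)
mask   = np.array([True, True, True, False])

q_in  = np.array([2, -3, 1, 90], dtype=np.int8)  # masked dim ignored entirely
q_out = np.array([20, 0, 0, 0], dtype=np.int8)   # violates the bounding box
```

Note that the masked dimension neither participates in the bounds check nor in the distance, which is what lets relevance masks act as attention.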

Rather than returning a single yes or no, the Reasoner interprets the distance through adversarial bands, as described in more detail in the dedicated concept entry linked from the Quick Wiki. A sceptic radius marks where membership is considered robustly True, while an optimist radius marks a wider region of Plausible membership. Outside both radii the answer is False. These thresholds are configured per profile and logged in provenance so that later audits can see precisely why a query was classified in a particular band. When you call the high-level ask method, the engine combines this geometric judgement with the symbolic verdict based on explicit facts; truth in the returned object reflects the symbolic reasoning, while band summarises where the query fell relative to the sceptic and optimist radii. This turns deduction into a deterministic, measurable operation: "Under layers L, with mask M, at distance D relative to radii (Rs, Ro), the query belongs or does not belong."
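The banded interpretation of a distance reduces to two threshold comparisons. A minimal sketch, assuming the band labels "True", "Plausible", and "False" (the engine's actual labels and return types may differ):

```python
def band_for(distance, sceptic_r, optimist_r):
    """Map a masked L1 distance to an adversarial band (illustrative)."""
    if distance <= sceptic_r:
        return "True"        # robustly inside: even the sceptic agrees
    if distance <= optimist_r:
        return "Plausible"   # only the optimist's wider radius accepts it
    return "False"           # outside both radii
```

Because the radii come from the active profile and the distance is deterministic, logging (distance, sceptic_r, optimist_r) in provenance is enough to replay any classification.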

Induction: Sculpting Concepts from Examples

Inductive reasoning is about learning general concepts from specific examples. In AGISystem2, when new observations that refer to the same conceptual label arrive, the system updates the corresponding diamonds rather than merely appending new points. The ClusterManager computes per-dimension minima and maxima across all relevant points, derives a centre, and adjusts the L1 radius to cover the farthest consistent example. If the new examples remain close to the existing region, this process simply tightens or slightly expands the diamond.
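The fitting step described above can be sketched in a few lines. This is an assumed reconstruction, not ClusterManager's real implementation; in particular the choice of centre as the box midpoint is illustrative.

```python
import numpy as np

def fit_diamond(points):
    """Induction sketch: per-dimension min/max, a centre, and an L1
    radius that covers the farthest example (hypothetical shapes)."""
    pts = points.astype(np.int32)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    centre = (lo + hi) // 2
    radius = int(np.abs(pts - centre).sum(axis=1).max())
    return lo, hi, centre, radius

# Three 2-dimensional examples of the same conceptual label
examples = np.array([[1, 2], [3, 6], [5, 4]], dtype=np.int8)
lo, hi, centre, radius = fit_diamond(examples)
```

Re-running the fit after each new batch of examples is what tightens or expands the diamond over time.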

When examples begin to form distinct groups that are too far apart, the clustering logic may split a single conceptual label into multiple diamonds, each capturing a different meaning or sub-type. For instance, references to "bank" in financial contexts and in river geography will eventually occupy very different parts of the space, prompting a split. Internal clustering scenarios verify that such splits preserve determinism and that retrieval continues to favour the most geometrically appropriate sense. From the perspective of the conceptual spaces chapter, induction is the ongoing process of fitting regions around clouds of points as new data arrives.
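A deliberately simplified split criterion illustrates the idea: if a new example is far from every existing diamond of the label, start a new diamond for a new sense. The single-threshold rule and the function name are assumptions for illustration; the real clustering logic is richer.

```python
import numpy as np

def assign_or_split(point, centres, split_threshold):
    """Clustering sketch: attach the point to the nearest centre, or
    start a new diamond for the label when every centre is too far."""
    p = point.astype(np.int32)
    dists = [int(np.abs(p - c.astype(np.int32)).sum()) for c in centres]
    best = int(np.argmin(dists))
    if dists[best] > split_threshold:
        centres.append(point.copy())   # a new sense of the same label
        return len(centres) - 1
    return best

# "bank": financial examples sit near the first centre; a river-geography
# example lands far away and triggers a split into a second diamond.
centres = [np.array([0, 0], dtype=np.int8)]
i_fin = assign_or_split(np.array([1, 1], dtype=np.int8), centres, 10)
i_riv = assign_or_split(np.array([60, 60], dtype=np.int8), centres, 10)
```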

Abduction: Inverting Relations to Explain Observations

Abductive reasoning seeks explanations. Given an observation and knowledge of causal or relational structure, the engine searches for the best hypothesis that would make the observation expected. Geometrically, if we observe a point corresponding to "Smoke" and we know there is a CAUSES relation, the Reasoner applies the inverse permutation associated with CAUSES to the observation vector. This inverse binding operation, implemented via MathEngine and RelationPermuter, produces a vector that lives in the space where causes of such observations reside.
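Because a permutation is exactly invertible, the inverse binding step loses no information. A small sketch, using a fixed stand-in permutation table for CAUSES (the real RelationPermuter tables are of course different):

```python
import numpy as np

causes_perm = np.array([2, 0, 3, 1])   # fixed stand-in for the CAUSES table

def bind(vec, perm):
    """Forward binding: apply the relation permutation."""
    return vec[perm]

def unbind(vec, perm):
    """Abduction step: invert the permutation to move from the effect's
    region back towards the region where its causes live."""
    inverse = np.argsort(perm)         # argsort yields the inverse permutation
    return vec[inverse]

fire  = np.array([5, -3, 7, 6], dtype=np.int8)   # toy "Fire" encoding
smoke = bind(fire, causes_perm)                   # observed effect encoding
hypothesis = unbind(smoke, causes_perm)           # recovers the cause's point
```

The recovered point is then handed to the Retriever, which looks for nearby diamonds such as "Fire".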

The Retriever then searches for concept diamonds near this hypothesised point using locality-sensitive hashing followed by exact masked L1 checks. If "Fire" is the closest relevant concept under the current theory stack, the Reasoner can present it as the best explanation, along with a provenance trail that shows which relation was inverted and what distance thresholds were used. Internal abductive scenarios exercise this pattern: starting from effects, inverting relation permutations, and retrieving plausible causes.

Analogy: Translating Vectors

Analogical reasoning asks questions of the form "A is to B as C is to what?" In the engine this is implemented as a vector translation. First the Reasoner computes the difference between the encodings of A and B. This difference vector captures, in aggregate, how B differs from A in the conceptual space. Then that delta is added to the vector for C to produce a predicted location for D, all done with saturated arithmetic in MathEngine. This operation is reminiscent of the classic word-embedding analogy "king - man + woman ≈ queen", but here it operates on structured encodings that already include relation permutations and masks.
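The translation step with saturation can be sketched as below; clipping to the int8 range stands in for MathEngine's saturated arithmetic, and the toy vectors are invented for illustration.

```python
import numpy as np

def analogy_target(a, b, c):
    """A : B :: C : ?  Compute C + (B - A) in a widened type, then
    saturate back to int8 (sketch, not the MathEngine API)."""
    delta = b.astype(np.int32) - a.astype(np.int32)
    d = c.astype(np.int32) + delta
    return np.clip(d, -128, 127).astype(np.int8)   # saturate to int8 range

king  = np.array([100,  40, 0], dtype=np.int8)
man   = np.array([ 10,  40, 0], dtype=np.int8)
woman = np.array([ 10, -40, 0], dtype=np.int8)

queen_pred = analogy_target(man, king, woman)   # man : king :: woman : ?

# Saturation keeps extreme deltas bounded instead of wrapping around.
overflow = analogy_target(man, king, np.array([100, 0, 0], dtype=np.int8))
```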

Once the predicted point is obtained, Retriever is asked to find the nearest concept diamonds. The candidate with the smallest masked L1 distance within appropriate bands is proposed as the answer to the analogy. Carefully crafted mini spaces illustrate this behaviour on small examples. Because all components are deterministic and bounded, running the same analogy under the same configuration always yields the same candidate set and distances, making analogical reasoning auditable rather than mysterious.

Counterfactuals and Non-Monotonic Logic

Counterfactual reasoning explores "what if" worlds that differ from the base theory in controlled ways. Instead of mutating stored concepts, AGISystem2 uses TheoryLayer and TheoryStack to model such variations. To answer a query like "If gravity were weaker, would this orbit be stable?", the engine clones the current theory stack, pushes a temporary layer that modifies only a subset of dimensions—for example, reducing values on certain physicality or force axes—and then runs the usual deduction steps in that altered context. After computing the result, the temporary stack is discarded and the base theory remains unchanged.
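The push-query-pop discipline can be sketched with a minimal stand-in stack. Here layers are modelled as additive per-dimension offsets composed by saturated summation; the real TheoryStack composition is assumed to be more sophisticated, so treat this only as an illustration of the lifecycle.

```python
import numpy as np

class TheoryStack:
    """Minimal stand-in: layers are per-dimension offsets summed onto a
    base centre with saturation (assumed semantics, not the real class)."""
    def __init__(self, base_centre):
        self.base = base_centre.astype(np.int32)
        self.layers = []
    def push(self, offsets):
        self.layers.append(offsets.astype(np.int32))
    def pop(self):
        return self.layers.pop()
    def runtime_centre(self):
        centre = self.base.copy()
        for layer in self.layers:
            centre += layer
        return np.clip(centre, -128, 127).astype(np.int8)

base = np.array([50, 0, 0], dtype=np.int8)        # dim 0: a toy force axis
stack = TheoryStack(base)

weaker_gravity = np.array([-30, 0, 0], dtype=np.int8)
stack.push(weaker_gravity)                         # temporary "what if" layer
counterfactual_centre = stack.runtime_centre()     # deduction runs against this
stack.pop()                                        # discard the hypothesis
restored_centre = stack.runtime_centre()           # base theory unchanged
```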

This approach supports non-monotonic logic, where adding new information can retract previous conclusions. Exceptions such as "Birds typically fly, but penguins do not" can be modelled as layers that override specific regions of the space. The ValidationEngine can compare results across stacks to discover conflicts, while provenance records which layers were active for each answer. Internal narrative and counterfactual scenarios exercise these mechanisms by constructing situations where local overrides should win over default rules without collapsing the entire conceptual structure.

Temporal and Causal Sequences

Time and causality are encoded using permutations and rotations rather than bespoke temporal logics. TemporalMemory maintains rotation tables that advance working memory one tick at a time. On each tick the current memory vector is rotated and then combined with the encoding of the new event. This pattern ensures that information about more recent events occupies different parts of the vector than older events, roughly capturing order and recency. To ask about the past, the engine applies inverse rotations, effectively rewinding the state before applying a query.
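The tick-and-rewind mechanics can be sketched with a cyclic shift standing in for TemporalMemory's rotation tables; the combine step here is saturated addition, which is an assumption for illustration.

```python
import numpy as np

def tick(memory, event):
    """One TemporalMemory step (sketch): rotate the working memory,
    then combine the new event with saturated addition."""
    rotated = np.roll(memory, 1)                    # stand-in rotation
    summed = rotated.astype(np.int32) + event.astype(np.int32)
    return np.clip(summed, -128, 127).astype(np.int8)

def rewind(memory, ticks):
    """Inverse rotation: roll the other way to look back in time.
    (This inverts only the rotation, not any events combined since.)"""
    return np.roll(memory, -ticks)

m0 = np.zeros(4, dtype=np.int8)
e1 = np.array([5, 0, 0, 0], dtype=np.int8)
m1 = tick(m0, e1)                            # event lands in the newest slot
m2 = tick(m1, np.zeros(4, dtype=np.int8))    # one empty tick passes
```

After the empty tick, the event's trace has moved one slot over, which is how recency is read off geometrically.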

Causal relations such as CAUSES and CAUSED_BY are handled through dedicated permutations in RelationPermuter, but they obey the same geometric principles as other relations. When the Reasoning or Bias chapters refer to temporal or deontic bands, they are describing configuration choices for how strict these temporal and causal checks should be. Internal compliance and deontic scenarios combine temporal memory with value layers to verify that changing a time window or switching a regulatory regime only affects the intended parts of the space.

Deontic Reasoning and Values

Deontic reasoning deals with permissions, obligations, and prohibitions. In AGISystem2, these notions live on the axiology dimensions of the conceptual space. Relations such as PERMITS, OBLIGATES, and PROHIBITS are encoded as permutations and value shifts on those dimensions. When a deontic rule applies, it creates or adjusts a region in axiology space that says, for example, "under law L, action A is forbidden in context C". When a query is evaluated, the Reasoner checks both ontological membership (is this situation factually inside the relevant concept diamonds?) and axiological membership (under which layers is this action permitted or forbidden?).
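The dual membership check can be sketched as two masked L1 tests over disjoint dimension groups. The split into four ontological and two axiology dimensions, and all names here, are invented for illustration.

```python
import numpy as np

def masked_l1(a, b, mask):
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32))[mask].sum())

def deontic_verdict(situation, fact_centre, fact_radius, fact_mask,
                    law_centre, law_radius, axio_mask):
    """Two checks (sketch): is the situation factually inside the
    concept diamond, and does it fall inside a prohibition region
    on the axiology dimensions?"""
    factual   = masked_l1(situation, fact_centre, fact_mask) <= fact_radius
    forbidden = masked_l1(situation, law_centre, axio_mask) <= law_radius
    return factual, forbidden

# Toy layout: dims 0-3 ontological, dims 4-5 axiological.
situation   = np.array([1, 2, 0, 0,  9,  9], dtype=np.int8)
fact_centre = np.array([0, 0, 0, 0,  0,  0], dtype=np.int8)
law_centre  = np.array([0, 0, 0, 0, 10, 10], dtype=np.int8)  # "prohibited" region
fact_mask = np.array([True] * 4 + [False] * 2)
axio_mask = np.array([False] * 4 + [True] * 2)

factual, forbidden = deontic_verdict(situation, fact_centre, 5, fact_mask,
                                     law_centre, 4, axio_mask)
```

Swapping the law layer (a different law_centre under another regulatory regime) changes only the second check, which is the point of keeping facts and values on separate dimensions.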

Because values are separated from facts at the level of dimensions, the BiasController can provide modes that either emphasise or suppress certain value systems. For instance, a veil-of-ignorance mode might hide protected attributes and some axiology dimensions to simulate decisions made without knowledge of sensitive characteristics. The Bias & Values chapter explains this separation in detail, and internal audit tests verify that masking or swapping value layers produces predictable differences in deontic outcomes without corrupting factual reasoning.

Sparsity, Masks, and Validation

Throughout all these reasoning modes, relevance masks act as explicit attention mechanisms. They determine which dimensions participate in each distance computation and thus which aspects of the world influence the answer. The ValidationEngine can run "dry" reasoning passes that explore the logical consequences of certain masks and layers without changing stored concepts, searching for conflicts, unreachable regions, or surprising dependencies. For example, it can check whether masking a set of dimensions leaves a decision unchanged; if the outcome flips, that mask might be hiding an implicit dependency worth auditing.
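The flip check described above is simple to state concretely: rerun the same decision with some dimensions masked out and compare. A sketch under assumed names, with a toy decision whose signal lives entirely on one dimension:

```python
import numpy as np

def decision(point, centre, radius, mask):
    """Toy decision: masked L1 membership in a single diamond."""
    d = int(np.abs(point.astype(np.int32) - centre.astype(np.int32))[mask].sum())
    return d <= radius

def mask_flips_decision(point, centre, radius, mask, dims_to_hide):
    """Dry validation pass (sketch): rerun the decision with extra
    dimensions masked out and report whether the outcome changed."""
    hidden = mask.copy()
    hidden[dims_to_hide] = False
    return (decision(point, centre, radius, mask)
            != decision(point, centre, radius, hidden))

centre = np.zeros(4, dtype=np.int8)
mask   = np.ones(4, dtype=bool)
point  = np.array([0, 0, 0, 6], dtype=np.int8)   # only dim 3 carries signal
```

Hiding dimension 3 flips the verdict, flagging an implicit dependency on that axis; hiding an inert dimension leaves the decision unchanged.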

The combination of masks, adversarial bands, and theory stacks is what makes AGISystem2’s reasoning both flexible and explainable. You can change context by pushing or popping layers, you can adjust strictness by tuning radii, and you can inspect which dimensions and masks mattered for each step. Other documentation chapters—especially the Bias & Values and Explainability guides, together with the algorithm-focused entries linked from the Quick Wiki—provide complementary perspectives on the same machinery. This chapter focuses on the story: points move, regions shift, and layers overlay, but all under deterministic control with a clear paper trail.

[Diagram: Text & grammar → Encoded vector → TheoryStack + layers → Retriever + LSH → MathEngine (L1) → Bands & provenance]

This diagram groups the main reasoning steps: normalising and encoding text, composing concepts under the active theory stack, proposing candidates with Retriever and LSH, measuring masked L1 distances with MathEngine, interpreting results through adversarial bands and value layers, and finally recording provenance. It replaces the enumerated ASCII flow with a visual overview that mirrors the narrative in this chapter.

Because every stage of this flow is deterministic and uses explicit data structures, reasoning steps can be replayed, inspected, and compared across configurations or versions. That determinism underpins the explainability guarantees described in the dedicated Explainability chapter and validated by the engine’s internal test suites.