The Reasoning Layer performs inference operations over the knowledge stored in the Knowledge Layer. It transforms queries into geometric operations, retrieves relevant concepts, computes distances and inclusions, and produces results with confidence scores and full provenance traces. All reasoning modes—deduction, abduction, analogy, counterfactual, and temporal—are implemented as operations on bounded diamonds.
Specification: DS(/engine/reasoner) · DS(/engine/retriever) · DS(/engine/bias_controller)
The Reasoner is the central orchestrator for inference. It receives parsed queries, selects the appropriate reasoning mode, coordinates retrieval and distance computation, and assembles results with explanations.
Reasoner query processing. The Mode Selector determines the reasoning strategy. The Retriever finds candidate concepts via LSH. The MathEngine computes exact distances. The TheoryStack provides context. Grading produces confidence scores. The final Result includes a truth value and the full provenance trace.
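The stage sequence above can be sketched as a thin orchestration function. Note that `selectMode`, `retrieve`, `computeDistances`, and `grade` are hypothetical stand-ins for the Mode Selector, Retriever, MathEngine, and Grading components, not the specified API:

```javascript
// Sketch of the Reasoner pipeline. Each stage function is injected so
// the real components can be substituted; names are illustrative only.
function answer(query, stages) {
  const trace = [];
  const mode = stages.selectMode(query);                      // Mode Selector
  trace.push({ step: 'mode', mode });
  const candidates = stages.retrieve(query);                  // Retriever (LSH)
  trace.push({ step: 'retrieve', count: candidates.length });
  const scored = stages.computeDistances(query, candidates);  // MathEngine
  const { truth, confidence } = stages.grade(scored);         // Grading
  return { truth, confidence, trace };
}
```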
| Mode | Question Type | Geometric Operation |
|---|---|---|
| Deduction | "Is X a Y?" | Check if point X is inside diamond Y |
| Abduction | "What could cause Y?" | Find concepts whose CAUSES relation includes Y |
| Analogy | "X is to Y as ? is to Z" | Vector arithmetic: compute offset and apply |
| Counterfactual | "What if X were true?" | Push temporary layer, assert X, query, pop |
| Temporal | "Was X true before?" | Apply inverse rotation, check inclusion |
| Deontic | "Is X permitted?" | Check PERMITS/PROHIBITS relations in stack |
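A minimal sketch of mode selection over the question types above, assuming the Mode Selector can key off surface question patterns (the patterns and the fallback to deduction are illustrative, not part of the specification):

```javascript
// Hypothetical mode dispatch based on question shape. A real Mode
// Selector would presumably inspect the parsed query, not raw text.
function selectMode(query) {
  if (query.startsWith('What if')) return 'counterfactual';
  if (/is to .* as/.test(query)) return 'analogy';
  if (/before|after|was/i.test(query)) return 'temporal';
  if (/permitted|allowed|prohibited/i.test(query)) return 'deontic';
  if (/could cause|why/i.test(query)) return 'abduction';
  return 'deduction'; // default: "Is X a Y?" style questions
}
```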
Deduction is the most common reasoning operation. The query "Is Dog an Animal?" becomes: retrieve the Animal concept, encode Dog as a point, and check whether the point lies inside the Animal diamond(s).
function deductionCheck(subject, relation, object) {
  // Get composed concept through theory stack
  const objectConcept = theoryStack.compose(object);
  if (!objectConcept) return { truth: 'UNKNOWN', reason: 'Object not found' };

  // Encode subject with relation permutation
  const subjectPoint = encodeWithRelation(subject, relation);

  // Check inclusion in all diamonds
  for (const diamond of objectConcept.diamonds) {
    if (isInside(subjectPoint, diamond)) {
      return {
        truth: 'TRUE_CERTAIN',
        confidence: computeConfidence(subjectPoint, diamond),
        trace: [{ step: 'inclusion', diamond, result: 'inside' }]
      };
    }
  }

  // Check proximity for plausible answers
  const nearestDist = nearestDistance(subjectPoint, objectConcept);
  if (nearestDist < plausibilityThreshold) {
    return {
      truth: 'TRUE_PLAUSIBLE',
      confidence: 1 - (nearestDist / plausibilityThreshold),
      trace: [{ step: 'proximity', distance: nearestDist }]
    };
  }

  return { truth: 'FALSE', confidence: 0.9, trace: [{ step: 'no_match', distance: nearestDist }] };
}
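The inclusion test used above can be sketched for a diamond modeled as an L1 ball. The `{ center, radius }` shape is an assumption for illustration; the real MathEngine may represent bounded diamonds differently:

```javascript
// Sketch: a diamond as an L1 ball { center: number[], radius: number }.
// A point is inside when its Manhattan distance to the center is at
// most the radius. This representation is assumed, not specified.
function l1Distance(a, b) {
  let d = 0;
  for (let i = 0; i < a.length; i++) d += Math.abs(a[i] - b[i]);
  return d;
}

function isInside(point, diamond) {
  return l1Distance(point, diamond.center) <= diamond.radius;
}
```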
Results include confidence computed from two perspectives:

- A sceptic grade: a conservative, lower-bound estimate of the evidence.
- An optimist grade: a permissive, upper-bound estimate.

The final confidence is the overlap between the two bands, providing a robust estimate even with uncertain data.
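One way to read "overlap between bands" is interval intersection; the `[lo, hi]` band representation and the midpoint summary below are assumptions for illustration, not the specified grading formula:

```javascript
// Sketch: combine a sceptic band and an optimist band (each an
// assumed [lo, hi] confidence interval) by intersecting them; the
// midpoint of the intersection is reported as the final confidence.
function combineBands(sceptic, optimist) {
  const lo = Math.max(sceptic[0], optimist[0]);
  const hi = Math.min(sceptic[1], optimist[1]);
  if (lo > hi) return { confidence: 0, agreement: false }; // disjoint bands
  return { confidence: (lo + hi) / 2, agreement: true };
}
```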
The Retriever provides fast approximate lookup of relevant concepts. It uses Locality-Sensitive Hashing (LSH) to find candidates in sub-linear time, then passes candidates to MathEngine for exact distance computation.
LSH retrieval. The query vector is hashed through multiple hash functions. Each hash points to a bucket containing similar concepts. The union of buckets forms the candidate set for exact distance computation.
| Parameter | auto_test | manual_test | prod |
|---|---|---|---|
| Number of hash tables | 4 | 8 | 16 |
| Hashes per table | 8 | 12 | 16 |
| Bucket width | 20 | 15 | 10 |
| Max candidates | 50 | 100 | 200 |
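A minimal sketch of one random-projection LSH table, parameterized like the table above (hashes per table and bucket width as arguments). The actual hash family and bucket layout used by the Retriever are not specified here:

```javascript
// Sketch: one LSH table using random-projection hashing. Each hash
// projects the vector onto a random direction and quantizes by
// bucketWidth; the concatenated hashes form the bucket key. Identical
// vectors always share a bucket; nearby vectors usually do.
function makeLshTable(dims, hashesPerTable, bucketWidth, rng = Math.random) {
  const planes = Array.from({ length: hashesPerTable }, () =>
    Array.from({ length: dims }, () => rng() * 2 - 1));
  const buckets = new Map();
  const key = v => planes
    .map(p => Math.floor(p.reduce((s, w, i) => s + w * v[i], 0) / bucketWidth))
    .join(':');
  return {
    insert(name, v) {
      const k = key(v);
      if (!buckets.has(k)) buckets.set(k, []);
      buckets.get(k).push(name);
    },
    candidates(v) { return buckets.get(key(v)) || []; }
  };
}
```

A full retriever would maintain several such tables and take the union of the returned buckets as the candidate set for exact distance computation.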
The BiasController manages the separation between factual and value-laden reasoning. It applies masks to isolate ontology dimensions (facts) from axiology dimensions (values), enabling fairness modes and value-aware reasoning.
function applyFairnessMode(query, mode) {
  const mask = query.mask.slice(); // Copy base mask
  switch (mode) {
    case 'facts_only':
      // Zero out axiology range
      for (let i = 256; i < 384; i++) mask[i] = 0;
      break;
    case 'values_only':
      // Zero out ontology range
      for (let i = 0; i < 256; i++) mask[i] = 0;
      break;
    case 'veil_of_ignorance':
      // Zero out axiology + protected attributes
      for (let i = 256; i < 384; i++) mask[i] = 0;
      for (const dim of protectedDimensions) mask[dim] = 0;
      break;
  }
  return { ...query, mask };
}
The ValidationEngine performs consistency checks on the knowledge base. It can detect:

- Contradictions within each theory layer's concepts.
- Cycles in the concept graph.
function validateConsistency() {
  const issues = [];

  // Check for contradictions in each theory layer
  for (const layer of theoryStack.layers) {
    for (const [name, concept] of layer.concepts) {
      const contradictions = findContradictions(concept);
      issues.push(...contradictions);
    }
  }

  // Check for cycles
  const cycles = detectCycles(conceptGraph);
  issues.push(...cycles);

  return {
    valid: issues.length === 0,
    issues
  };
}
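The `detectCycles` step can be sketched as a standard depth-first search with node coloring; the adjacency-map graph shape (`Map<string, string[]>`, node to outgoing edges) is an assumption for illustration:

```javascript
// Sketch: cycle detection via three-color DFS. A node colored 1 is on
// the current recursion stack; reaching it again closes a cycle.
function detectCycles(graph) {
  const color = new Map(); // absent = unvisited, 1 = in progress, 2 = done
  const cycles = [];
  function visit(node, path) {
    color.set(node, 1);
    for (const next of graph.get(node) || []) {
      if (color.get(next) === 1) {
        // next is on the current path: record the cycle slice
        cycles.push([...path.slice(path.indexOf(next)), next]);
      } else if (!color.has(next)) {
        visit(next, [...path, next]);
      }
    }
    color.set(node, 2);
  }
  for (const node of graph.keys()) {
    if (!color.has(node)) visit(node, [node]);
  }
  return cycles;
}
```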
TemporalMemory encodes time by rotating vectors. Each time step applies a fixed rotation, shifting dimensions cyclically. This allows the system to reason about past and future states without separate storage structures.
| Operation | Description |
|---|---|
| `tick()` | Advance time by one step (rotate current state) |
| `rewind(n)` | Apply n inverse rotations to query past |
| `forward(n)` | Apply n forward rotations to query future |
| `snapshot()` | Save current time state for later comparison |
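The rotation behind these operations can be sketched as a cyclic dimension shift; the one-dimension-per-step stride is an assumption, and the specified rotation may use a larger stride:

```javascript
// Sketch: time as cyclic shift. rotate(v, 1) is one tick; negative
// steps rewind. rotate(rotate(v, n), -n) recovers v, which is what
// lets the past be queried without separate storage.
function rotate(vec, steps) {
  const n = vec.length;
  const s = ((steps % n) + n) % n; // normalize, handles negative steps
  return vec.map((_, i) => vec[(i - s + n) % n]);
}
```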
Every reasoning result includes a provenance trace that records the steps taken to reach the conclusion. This trace enables:

- Explanation of why a conclusion was reached.
- Auditing of individual reasoning steps.
- Reproduction and debugging of queries.

An example result:
{
  truth: "TRUE_CERTAIN",
  confidence: 0.92,
  trace: [
    { step: "parse", input: "Is Dog an Animal?", output: { s: "Dog", r: "IS_A", o: "Animal" } },
    { step: "encode", subject: "Dog", relation: "IS_A", vectorHash: "a7b3..." },
    { step: "retrieve", candidates: ["Animal", "Mammal", "Creature"], method: "LSH" },
    { step: "compose", concept: "Animal", layers: ["base"], diamondCount: 1 },
    { step: "inclusion", point: "Dog", diamond: "Animal#0", distance: 12, radius: 50, result: "inside" },
    { step: "grade", sceptic: 0.89, optimist: 0.95, final: 0.92 }
  ]
}