Explainability in AI means more than generating plausible-sounding text. AGISystem2 provides actual proof traces—the real computational steps that led to a conclusion, not a post-hoc confabulation.
Unlike neural networks, where decisions emerge from billions of opaque weights, AGISystem2's reasoning is inherently transparent:

- **Explicit rules.** Every inference corresponds to a named rule in the knowledge base; no implicit patterns are learned from data.
- **Transparent operations.** Each reasoning step is an explicit HDC operation (BIND, UNBIND, SIMILARITY); the proof trace IS the computation. See the sketch after this list.
- **Determinism.** The same input always produces the same output, so explanations are reproducible and debuggable.
- **Readable knowledge.** Knowledge is stored in a human-readable DSL format, not in opaque weight matrices.
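For concreteness, here is a minimal sketch of the three HDC operations using a multiply-bind scheme over bipolar vectors. It illustrates the style of computation that makes traces self-verifying; AGISystem2's actual encoding may differ.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # fixed seed: determinism is the point
D = 10_000                            # typical hyperdimensional vector size

def random_hv() -> np.ndarray:
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bind(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """BIND: elementwise multiply. Self-inverse, so UNBIND is the same op."""
    return a * b

unbind = bind  # UNBIND(BIND(a, b), a) = a * a * b = b

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """SIMILARITY: cosine similarity; near 0 for unrelated hypervectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Encode the fact Dog(Spot) by binding the predicate to its argument.
DOG, SPOT = random_hv(), random_hv()
fact = bind(DOG, SPOT)

# UNBIND recovers the argument; SIMILARITY verifies the match.
print(similarity(unbind(fact, DOG), SPOT))   # 1.0: exact recovery
print(similarity(fact, random_hv()))         # ~0.0: unrelated vector
```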
When AGISystem2 proves a goal, it generates a complete trace of the reasoning steps, and every element of that trace is verifiable: each cited rule can be looked up in the knowledge base, and each similarity score can be recomputed from the underlying vectors.
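A trace can therefore be carried as plain data. The record format below is an assumption, but its fields (operation, rule, bindings, similarity, depth) mirror the developer-level example shown later in this section.

```python
from dataclasses import dataclass, field

@dataclass
class TraceStep:
    operation: str            # e.g. "match" or "apply_rule"
    rule: str | None          # named KB rule, if one was applied
    bindings: dict[str, str]  # variable substitutions, e.g. {"?x": "Spot"}
    similarity: float         # HDC similarity score for this step
    depth: int                # position in the proof tree

@dataclass
class ProofTrace:
    goal: str
    steps: list[TraceStep] = field(default_factory=list)
    confidence: float = 0.0

# The Animal(Spot) proof from the developer-level example, as data.
trace = ProofTrace(
    goal="Animal(Spot)",
    steps=[
        TraceStep("match", None, {}, similarity=0.97, depth=1),
        TraceStep("apply_rule", "Dog_IS_A_Animal", {"?x": "Spot"},
                  similarity=0.97, depth=2),
    ],
    confidence=0.97,
)
```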
Different stakeholders need different levels of detail:
**Developer level**

- Audience: system developers, debuggers
- Content: full proof trace with vector operations, similarity scores, and binding/unbinding steps
- Example:

```
prove(Animal(Spot)):
  match(Dog(Spot), KB) → sim=0.97
  apply_rule(Dog_IS_A_Animal, {?x=Spot})
  → Animal(Spot)  [confidence=0.97, depth=2]
```
**Expert level**

- Audience: domain specialists, knowledge engineers
- Content: key reasoning steps and the rules applied, without HDC internals
- Example: "Concluded Spot is an animal by applying the inheritance rule 'all dogs are animals' to the known fact that Spot is a dog."
**End-user level**

- Audience: non-technical users
- Content: natural-language summary of the conclusion
- Example: "Spot is an animal because Spot is a dog, and all dogs are animals."
**Audit level**

- Audience: compliance officers, legal reviewers
- Content: decision justification with rule citations
- Example: "Decision: APPROVED. Basis: Rule KB.Ontology.001 (taxonomic inheritance) applied to established fact F.Dog.Spot."
**Contrastive explanations.** Often the most useful explanation answers "Why X instead of Y?"
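Assuming that failed proof attempts also leave (partial) traces, which is an assumption about the engine rather than a documented behavior, a contrastive answer can point at the exact step where the alternative broke down. The helper below is hypothetical.

```python
def contrast(succeeded: ProofTrace, failed: ProofTrace,
             threshold: float = 0.9) -> str:
    """'Why X instead of Y?': locate the step where Y's proof broke down."""
    bad = next((s for s in failed.steps if s.similarity < threshold), None)
    if bad is not None:
        reason = (f"step {bad.operation}({bad.rule}) reached only "
                  f"sim={bad.similarity:.2f}, below the {threshold:.2f} threshold")
    else:
        reason = "no applicable rule was found in the knowledge base"
    return f"{succeeded.goal} rather than {failed.goal}: {reason}."
```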
**Counterfactual explanations.** A related question is what would have to change for the outcome to be different.
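Because proofs are deterministic, counterfactuals can be probed by replaying the query against an edited knowledge base. In this sketch, `prove` and the fact list `kb` stand in for whatever entry points the real engine exposes.

```python
def counterfactuals(goal: str, kb: list[str], prove) -> list[str]:
    """Find single facts whose removal flips the outcome of `goal`.

    `prove(goal, kb)` is assumed to return a ProofTrace on success and
    None on failure; each replay is then an exact what-if experiment.
    """
    baseline = prove(goal, kb) is not None
    flips = []
    for fact in kb:
        edited = [f for f in kb if f != fact]      # remove one fact at a time
        if (prove(goal, edited) is not None) != baseline:
            flips.append(fact)
    return flips  # e.g. ["Dog(Spot)"]: without it, Animal(Spot) is unprovable
```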
AGISystem2 explanations can be evaluated on:
| Metric | Description | How HDC Achieves It |
|---|---|---|
| Fidelity | Explanation matches actual reasoning | Proof trace IS the computation |
| Completeness | All relevant steps included | Full trace available, summarizable |
| Consistency | Same query → same explanation | Deterministic operations |
| Verifiability | Claims can be checked | Rules and facts inspectable |
| Minimality | No irrelevant details | Proof traces are minimal paths |
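The consistency row is directly testable because the engine is deterministic. A sketch, again assuming a `prove(goal, kb)` entry point:

```python
def check_consistency(goal: str, kb: list[str], prove, runs: int = 5) -> bool:
    """Consistency: the same query must yield an identical trace every run."""
    traces = [prove(goal, kb) for _ in range(runs)]
    return all(t == traces[0] for t in traces)
```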
Explainability enables systematic bias analysis:
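For example, because every decision cites the rules it used, rule firings can be tabulated across groups of inputs; skew in these counts is a starting point for an audit. The helper and its field names below are assumptions.

```python
from collections import Counter

def rule_firing_rates(decisions, group_of) -> dict[str, Counter]:
    """Count rule citations per group.

    `decisions` is an iterable of (input, ProofTrace) pairs and `group_of`
    maps an input to its group label; both are assumptions of this sketch.
    """
    counts: dict[str, Counter] = {}
    for item, trace in decisions:
        bucket = counts.setdefault(group_of(item), Counter())
        bucket.update(s.rule for s in trace.steps if s.rule)
    return counts  # {"group_a": Counter({"KB.Ontology.001": 42}), ...}
```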
See Bias Study for detailed methodology.
AGISystem2's explainability system is organized as a layered stack:
```
+--------------------------------------------------+
|              Explanation Generator               |
|    Transforms proof traces to target audience    |
+--------------------------------------------------+
|                Proof Trace Store                 |
|      Complete record of all reasoning steps      |
+--------------------------------------------------+
|                 Reasoning Engine                 |
|       prove() / query() with trace logging       |
+--------------------------------------------------+
|                  HDC Foundation                  |
|       Deterministic, traceable operations        |
+--------------------------------------------------+
```
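The layering could be wired together as in the sketch below; the class names mirror the diagram but are otherwise invented for illustration.

```python
class TraceStore:
    """Proof Trace Store: append-only record of every completed proof."""
    def __init__(self) -> None:
        self._traces: list[ProofTrace] = []

    def append(self, trace: ProofTrace) -> None:
        self._traces.append(trace)

    def all(self) -> list[ProofTrace]:
        return list(self._traces)

class ReasoningEngine:
    """Reasoning Engine: prove() with every trace logged to the store."""
    def __init__(self, store: TraceStore) -> None:
        self.store = store

    def prove(self, goal: str) -> ProofTrace:
        trace = ProofTrace(goal=goal)   # a real engine would populate steps
        self.store.append(trace)        # nothing is decided off the record
        return trace
```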