Abduction is inference to the best explanation. Given an observation O, abduction finds hypotheses H such that if H were true, O would be explained. Unlike deduction (proving what must be true), abduction asks "what could explain this?"

Status: Advanced reasoning is partial (DS06). Abduction workflows are research-level and not fully implemented in the runtime.

1. Theoretical Foundation

The Three Modes of Inference:

Deduction: Rule + Case → Result "All men are mortal. Socrates is a man. ∴ Socrates is mortal."
Induction: Case + Result → Rule "Socrates is mortal. Socrates is a man. ∴ All men are mortal."
Abduction: Rule + Result → Case "All men are mortal. Socrates is mortal. ∴ Socrates is a man."

Formal Definition:

abduce(O, KB) = { H | KB ∪ {H} ⊢ O, KB ⊬ ¬H }

A hypothesis H is a valid explanation if, together with the knowledge base, it entails the observation (KB ∪ {H} ⊢ O) and it is not contradicted by the knowledge base (KB ⊬ ¬H).

Fallacy Warning: Abduction is not logically valid! "If it rains, the ground is wet" + "The ground is wet" does NOT prove it rained (sprinklers exist). Abduction finds possible explanations, not certain ones.

2. Abduction Process

Abduction Process Flow. Given the observation O: "The grass is wet", find rules whose consequent matches O:

- H₁: It rained (rule: rain → wet grass), P = 0.85
- H₂: Sprinklers on (rule: sprinklers → wet grass), P = 0.60
- H₃: Morning dew (rule: cold night → dew), P = 0.40

3. Algorithm: Abductive Inference

Algorithm: abduce(observation, kb)
  1. Initialize: hypotheses = []
  2. Find explanatory rules:
    • For each rule R: (P → C) in KB
    • θ = unify(observation, C)
    • If θ ≠ FAIL: Add (P·θ, R) to candidates
  3. Check consistency:
    • For each candidate H:
    • If prove(Not(H), KB) fails (H is not contradicted)
    • Add H to hypotheses
  4. Score hypotheses:
    • Prior probability P(H)
    • Explanatory coverage (how much of O does H explain?)
    • Simplicity (prefer fewer assumptions)
  5. Rank: Sort by score descending
  6. Return: Ranked hypotheses with scores
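The steps above can be sketched in TypeScript. This is an illustrative sketch, not the runtime's actual API: the rule format and helper names are assumptions, unification is reduced to exact matching on a ground consequent, and scoring uses rule strength alone.

```typescript
// Minimal abduction sketch: find rule antecedents whose consequent
// matches the observation, then keep those not contradicted by the KB.
type Rule = { antecedent: string; consequent: string; strength: number };

interface Hypothesis { fact: string; score: number }

function abduce(observation: string, rules: Rule[], known: Set<string>): Hypothesis[] {
  const hypotheses: Hypothesis[] = [];
  for (const r of rules) {
    // Step 2: the rule explains the observation if its consequent matches
    if (r.consequent !== observation) continue;
    // Step 3: consistency check -- skip antecedents the KB already negates
    if (known.has(`not ${r.antecedent}`)) continue;
    // Step 4 (simplified): score by rule strength alone
    hypotheses.push({ fact: r.antecedent, score: r.strength });
  }
  // Step 5: rank by score, descending
  return hypotheses.sort((a, b) => b.score - a.score);
}

const rules: Rule[] = [
  { antecedent: 'it rained', consequent: 'grass is wet', strength: 0.85 },
  { antecedent: 'sprinklers on', consequent: 'grass is wet', strength: 0.60 },
  { antecedent: 'cold night', consequent: 'dew formed', strength: 0.40 },
];
const kb = new Set(['not sprinklers on']); // suppose we know the sprinklers were off

const result = abduce('grass is wet', rules, kb);
// result: [{ fact: 'it rained', score: 0.85 }]
```

Note how the consistency check prunes the sprinkler hypothesis: abduction proposes every candidate whose rule fires, and the KB vetoes the ones it already refutes.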

4. Hypothesis Scoring

Bayesian Formulation:

P(H|O) = P(O|H) × P(H) / P(O)

The probability of hypothesis H given observation O depends on:

Scoring Factor       | Description                         | Weight
---------------------|-------------------------------------|-------
Rule confidence      | Strength of the explanatory rule    | 0.35
Prior probability    | Base rate of hypothesis being true  | 0.25
Explanatory coverage | Fraction of observations explained  | 0.25
Simplicity           | Fewer assumptions preferred (Occam) | 0.15
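The table's factors can be combined into a single weighted sum. The weights below mirror the table; the function itself is an illustrative sketch, and all inputs are assumed to be pre-normalized to [0, 1].

```typescript
// Weighted hypothesis score using the four factors from the table.
interface Factors {
  ruleConfidence: number; // strength of the explanatory rule
  prior: number;          // base rate of the hypothesis
  coverage: number;       // fraction of observations explained
  simplicity: number;     // 1.0 = a single assumption, lower = more assumptions
}

function scoreHypothesis(f: Factors): number {
  return 0.35 * f.ruleConfidence
       + 0.25 * f.prior
       + 0.25 * f.coverage
       + 0.15 * f.simplicity;
}

// A hypothesis with a strong rule that explains everything scores high:
const flu = scoreHypothesis({ ruleConfidence: 0.9, prior: 0.6, coverage: 1.0, simplicity: 1.0 });
// 0.35*0.9 + 0.25*0.6 + 0.25*1.0 + 0.15*1.0 ≈ 0.865
```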

5. Types of Abduction

Simple Abduction: Find a single hypothesis explaining one observation.

Observation: isA Socrates Mortal
Rule: @r1 Implies (isA ?x Human) (isA ?x Mortal)
Hypothesis: isA Socrates Human

Composite Abduction: Find a hypothesis explaining multiple observations.

Observations:
  - hasSymptom Patient123 Fever
  - hasSymptom Patient123 Cough
  - hasSymptom Patient123 Fatigue

Hypothesis: hasDiagnosis Patient123 Influenza
(Explains all three symptoms via known rules)

Discriminating Abduction: Find a hypothesis that explains O₁ but not O₂.

Observations:
  - The grass is wet
  - The sidewalk is dry

H₁ (rain) would wet both → Rejected
H₂ (sprinklers) only wets the grass → Accepted
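Discriminating abduction adds a negative test: a hypothesis survives only if it predicts the observed fact and does not also predict a fact observed to be false. A sketch, with illustrative type and function names:

```typescript
// A hypothesis lists everything it would predict if true.
interface Hyp { name: string; predicts: string[] }

// Keep hypotheses that predict the observed fact but none of the
// facts we observed to be false.
function discriminate(hyps: Hyp[], observed: string, ruledOut: string[]): Hyp[] {
  return hyps.filter(h =>
    h.predicts.includes(observed) &&
    !ruledOut.some(o => h.predicts.includes(o))
  );
}

const candidates: Hyp[] = [
  { name: 'rain',       predicts: ['wet grass', 'wet sidewalk'] },
  { name: 'sprinklers', predicts: ['wet grass'] },
];

const survivors = discriminate(candidates, 'wet grass', ['wet sidewalk']);
// survivors: only 'sprinklers' -- rain would also have wet the sidewalk
```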

6. Abduction in Practice

Application Domains:

- Medical Diagnosis (symptoms → disease): fever, cough, fatigue → flu? COVID? pneumonia?
- Fault Diagnosis (symptoms → cause): car won't start, no lights → dead battery? bad alternator?
- Scientific Discovery (data → theory): planetary motion, orbital periods → gravity? Kepler's laws?
- Story Understanding (event → reason): John went to the store and bought milk → needed milk? out of milk?

7. Handling Multiple Observations

Multi-Observation Abduction:
function abduceMultiple(observations, kb):
  // Find hypotheses that explain ALL observations
  candidateSets = []

  for each H that could explain observations[0]:
    remaining = observations.slice(1)
    if all remaining are explained by H or derivable from kb ∪ {H}:
      if consistent({H}, kb):
        candidateSets.push({H})
    else:
      // Try combining hypotheses
      for each H2 that could explain the remaining unexplained observations:
        combined = {H, H2}
        if all observations explained by kb ∪ combined:
          if consistent(combined, kb):
            candidateSets.push(combined)

  // Rank by parsimony: prefer smaller sets
  return sortBySize(candidateSets)
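The pseudocode above can be made concrete. This sketch tries single hypotheses first, falls back to pairs, and ranks candidate sets by size (parsimony); `explains` and the toy symptom model are illustrative stand-ins for the real rule-based derivation.

```typescript
// Multi-observation abduction: prefer one hypothesis covering everything,
// fall back to pairs, and rank candidate sets by size (parsimony).
type Explains = (hypothesis: string, observation: string) => boolean;

function abduceMultiple(observations: string[], hyps: string[], explains: Explains): string[][] {
  const sets: string[][] = [];
  for (const h of hyps) {
    if (observations.every(o => explains(h, o))) {
      sets.push([h]);          // one hypothesis covers all observations
      continue;
    }
    for (const h2 of hyps) {
      if (h2 === h) continue;
      if (observations.every(o => explains(h, o) || explains(h2, o))) {
        sets.push([h, h2]);    // a combined pair covers everything
      }
    }
  }
  return sets.sort((a, b) => a.length - b.length);
}

// Toy symptom model: which diagnoses explain which symptoms.
const model: Record<string, string[]> = {
  Flu: ['Fever', 'Cough', 'Fatigue'],
  Cold: ['Cough', 'Sneezing'],
};
const explains: Explains = (h, o) => model[h]?.includes(o) ?? false;

const ranked = abduceMultiple(['Fever', 'Cough'], ['Flu', 'Cold'], explains);
// ranked[0] is ['Flu'] -- the single hypothesis explaining both symptoms
```

The parsimony sort is what keeps a single-diagnosis explanation ahead of the two-diagnosis set {Cold, Flu}, which also covers both symptoms.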

8. Example: Diagnostic Reasoning

Knowledge Base:
@r1 Implies (hasDiagnosis ?p Flu) (hasSymptom ?p Fever)
@r2 Implies (hasDiagnosis ?p Flu) (hasSymptom ?p Cough)
@r3 Implies (hasDiagnosis ?p Flu) (hasSymptom ?p Fatigue)
@r4 Implies (hasDiagnosis ?p Cold) (hasSymptom ?p Cough)
@r5 Implies (hasDiagnosis ?p Cold) (hasSymptom ?p Sneezing)
@r6 Implies (hasDiagnosis ?p Allergy) (hasSymptom ?p Sneezing)
Observations:
hasSymptom Patient1 Fever
hasSymptom Patient1 Cough
hasSymptom Patient1 Fatigue
Abduction Result:
Hypotheses:
1. hasDiagnosis Patient1 Flu
   - Explains: Fever ✓, Cough ✓, Fatigue ✓
   - Coverage: 100%
   - Score: 0.92

2. And (hasDiagnosis Patient1 Cold) (unknown-cause Fever)
   - Explains: Cough ✓ (partial)
   - Coverage: 33%
   - Score: 0.35
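The coverage figures above can be reproduced with a small computation over rules @r1–@r6; the pair encoding below is a simplified stand-in for the KB's rule syntax.

```typescript
// Coverage check for the diagnostic example: each pair maps a diagnosis
// to one symptom it explains (a simplified reading of @r1..@r6).
const diagnosisRules: [string, string][] = [
  ['Flu', 'Fever'], ['Flu', 'Cough'], ['Flu', 'Fatigue'], // r1..r3
  ['Cold', 'Cough'], ['Cold', 'Sneezing'],                // r4, r5
  ['Allergy', 'Sneezing'],                                // r6
];

function coverage(diagnosis: string, symptoms: string[]): number {
  const explained = symptoms.filter(s =>
    diagnosisRules.some(([d, sym]) => d === diagnosis && sym === s)
  );
  return explained.length / symptoms.length;
}

const observed = ['Fever', 'Cough', 'Fatigue'];
const fluCoverage = coverage('Flu', observed);   // 3/3 = 1.0  (100%)
const coldCoverage = coverage('Cold', observed); // 1/3 ≈ 0.33 (Cough only)
```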

9. Confidence and Uncertainty

Confidence Propagation:

confidence(H) = prior(H) × Σᵢ coverage(H, Oᵢ) × rule_strength(Rᵢ)

The confidence in a hypothesis depends on how well it explains observations, the strength of the rules used, and prior plausibility.

Confidence Level | Interpretation       | Action
-----------------|----------------------|---------------------
0.9 - 1.0        | Strong explanation   | Accept hypothesis
0.7 - 0.9        | Likely explanation   | Consider accepting
0.5 - 0.7        | Possible explanation | Seek more evidence
< 0.5            | Weak explanation     | Explore alternatives
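The bands in the table can be expressed as a small lookup. The band boundaries follow the table (each lower bound inclusive), and the function name is illustrative rather than part of the runtime API.

```typescript
// Map a confidence score to the recommended action from the table.
function interpret(confidence: number): string {
  if (confidence >= 0.9) return 'Accept hypothesis';
  if (confidence >= 0.7) return 'Consider accepting';
  if (confidence >= 0.5) return 'Seek more evidence';
  return 'Explore alternatives';
}

const strong = interpret(0.92); // 'Accept hypothesis'
const weak = interpret(0.45);   // 'Explore alternatives'
```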

10. API Reference

import { AbductionEngine } from 'agisystem2/reasoning';

const engine = new AbductionEngine(session);

// Simple abduction
const single = await engine.abduce({
  operator: 'hasSymptom',
  args: ['Patient1', 'Fever']
});

// Multiple observations
const multiple = await engine.abduceMultiple([
  { operator: 'hasSymptom', args: ['Patient1', 'Fever'] },
  { operator: 'hasSymptom', args: ['Patient1', 'Cough'] }
]);

// With options
const filtered = await engine.abduce(observation, {
  maxHypotheses: 5,        // Limit results
  minConfidence: 0.5,      // Filter weak hypotheses
  requireCoverage: 0.8,    // Minimum explanation coverage
  checkConsistency: true   // Verify consistency with KB
});

11. Related Documentation