This guide compares the two HDC strategies available in AGISystem2, helping you choose the right approach for your application.
## Architecture Overview

### Dense Binary Strategy

Traditional HDC using fixed-dimensional vectors and bitwise operations.

- Representation: `Uint32Array` with fixed geometry (e.g., 32,768 bits)
- Binding: Bitwise XOR
- Bundle: Majority vote per bit
- Similarity: 1 - (Hamming distance / geometry), see the sketch below
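To make these operations concrete, here is a minimal, self-contained sketch of XOR binding and Hamming-based similarity over a `Uint32Array`. It is illustrative only, not AGISystem2's implementation: the 2,048-bit geometry and helper names are chosen just to keep the example small, and bundling (majority vote per bit) is omitted for brevity.

```javascript
// Illustrative dense binary HDC primitives (not the AGISystem2 code).
const GEOMETRY = 2048;           // bits per hypervector (example value)
const WORDS = GEOMETRY / 32;     // Uint32 words per hypervector

// Random hypervector: each bit is 0 or 1 with probability 0.5
function randomVector() {
  const v = new Uint32Array(WORDS);
  for (let i = 0; i < WORDS; i++) v[i] = (Math.random() * 0x100000000) >>> 0;
  return v;
}

// Binding: bitwise XOR, word by word
function bind(a, b) {
  const out = new Uint32Array(WORDS);
  for (let i = 0; i < WORDS; i++) out[i] = (a[i] ^ b[i]) >>> 0;
  return out;
}

// Similarity: 1 - (Hamming distance / geometry)
function similarity(a, b) {
  let distance = 0;
  for (let i = 0; i < WORDS; i++) {
    let x = (a[i] ^ b[i]) >>> 0;
    while (x) { distance += x & 1; x >>>= 1; } // popcount of differing bits
  }
  return 1 - distance / GEOMETRY;
}

// Quick checks of the properties listed below
const a = randomVector();
const b = randomVector();
console.log(similarity(a, a));                   // 1 (reflexive)
console.log(similarity(bind(bind(a, b), b), a)); // 1 (XOR cancellation)
console.log(similarity(a, b).toFixed(2));        // ~0.50 for random vectors
```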
#### Properties

- ✅ XOR cancellation: bind(bind(a,b), b) = a
- ✅ Perfect associativity: bind(bind(a,b), c) = bind(a, bind(b,c))
- ✅ Perfect commutativity: bind(a,b) = bind(b,a)
- ✅ Fast operations: O(n/32) word-level operations per bind (32 bits per Uint32 word)
- ❌ Fixed dimensionality limits scalability
#### Performance
- Binding: ~255K ops/sec
- Similarity: ~145K ops/sec
- Bundle: ~948 ops/sec
- Memory: 256 bytes per vector
### Sparse Polynomial HDC (SPHDC)

Sparse HDC using k BigInt exponents with Cartesian XOR binding.

- Representation: Set of k 64-bit integers (default k=4, 32 bytes)
- Binding: Cartesian XOR with Min-Hash sparsification
- Bundle: Set union with Min-Hash sampling
- Similarity: Jaccard Index (|A ∩ B| / |A ∪ B|), see the sketch below
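A minimal sketch of the sparse representation and its Jaccard similarity. This is illustrative only: the exponent values below are made up, while the real strategy derives the k exponents per concept deterministically (e.g., from names).

```javascript
// Jaccard Index over two sparse vectors, each a Set of BigInt exponents
function jaccard(a, b) {
  let intersection = 0;
  for (const x of a) if (b.has(x)) intersection++;
  const union = a.size + b.size - intersection;
  return union === 0 ? 0 : intersection / union; // |A ∩ B| / |A ∪ B|
}

// Two sparse vectors with k = 4 exponents each (values are illustrative)
const vecA = new Set([3n, 17n, 255n, 1024n]);
const vecB = new Set([3n, 17n, 999n, 2048n]);
console.log(jaccard(vecA, vecB)); // 2 shared of 6 total ≈ 0.333
```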
#### Properties

- ✅ 8x smaller memory footprint
- ✅ XOR cancellation: bind(bind(a,b), b) = a
- ✅ Perfect commutativity: bind(a,b) = bind(b,a)
- ✅ 1.5x faster than Dense Binary
- ✅ Memory efficient: 32 bytes per vector (k=4)
#### Performance

- Binding: O(k²) XOR operations (16 for k=4; sketched below)
- Similarity: O(k) with sorted exponent sets
- Bundle: Set union + sparsification
- Memory: 32 bytes per vector (k=4)
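The following sketch shows where the O(k²) cost comes from: Cartesian XOR binding produces up to k² candidate exponents, which a bottom-k (Min-Hash style) sparsification reduces back to k. It is an assumption-laden illustration, not the AGISystem2 implementation; in particular, the real strategy may sparsify via hashing rather than by keeping the smallest exponents.

```javascript
// Illustrative Cartesian XOR bind with bottom-k sparsification
function bind(a, b, k = 4) {
  // Cartesian XOR: up to k² candidate exponents (16 pairs for k = 4)
  const candidates = new Set();
  for (const x of a) {
    for (const y of b) {
      candidates.add(x ^ y); // BigInt XOR of one exponent pair
    }
  }
  // Sparsification: keep only the k smallest exponents
  const kept = [...candidates]
    .sort((p, q) => (p < q ? -1 : p > q ? 1 : 0))
    .slice(0, k);
  return new Set(kept);
}

const a = new Set([3n, 17n, 255n, 1024n]);
const b = new Set([5n, 21n, 310n, 4096n]);
console.log(bind(a, b)); // Set of 4 BigInt exponents (k preserved)
```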
## Mathematical Properties Comparison

| Property | Dense Binary | SPHDC | Impact |
|----------|--------------|-------|--------|
| Cancellation (XOR-based) | Perfect (sim = 1.0) | Perfect (XOR property) | Both strategies support XOR cancellation |
| Associative | Perfect (sim = 1.0) | Perfect (XOR property) | Both work perfectly |
| Commutative | Perfect (sim = 1.0) | Perfect (sim = 1.0) | Preserved by deterministic sorting |
| Reflexive | Perfect (sim = 1.0) | Perfect (sim = 1.0) | Both work perfectly |
| Symmetric | Perfect (sim = 1.0) | Perfect (sim = 1.0) | Both work perfectly |
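For a concrete sense of what the "sim = 1.0" entries mean, the checks below exercise the SPHDC column through the facade API used in the Implementation Example further down. This assumes `getStrategy` is exported from `facade.mjs` alongside `initHDC`.

```javascript
import { initHDC, getStrategy } from './src/hdc/facade.mjs';

initHDC('sparse-polynomial');
const sphdc = getStrategy('sparse-polynomial');

const a = sphdc.createFromName('A', 4);
const b = sphdc.createFromName('B', 4);

// Cancellation: binding with b twice recovers a
console.log(sphdc.similarity(sphdc.bind(sphdc.bind(a, b), b), a)); // expect 1.0

// Commutative: bind(a,b) and bind(b,a) are the same vector
console.log(sphdc.similarity(sphdc.bind(a, b), sphdc.bind(b, a))); // expect 1.0

// Reflexive and symmetric
console.log(sphdc.similarity(a, a));                            // expect 1.0
console.log(sphdc.similarity(a, b) === sphdc.similarity(b, a)); // expect true
```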
## Performance Comparison

### Eval Suite Results

| Metric | Dense Binary | SPHDC (k=4) | Winner |
|--------|--------------|-------------|--------|
| Pass Rate | 100% (126/126) | 100% (126/126) | Tie |
| Total Time | 91ms | 60ms | SPHDC (1.5x faster) |
| Memory/Vector | 256 bytes | 32 bytes | SPHDC (8x smaller) |
| HDC Retrieval | 35% success | 0% success | Dense (better retrieval) |
## Use Case Recommendations

### Choose Dense Binary when:
- Similarity-based retrieval is required (HDC Master Equation)
- Research/comparison baseline needed
- Standard HDC semantics required
- Backward compatibility is important
### Choose SPHDC when:
- Pure symbolic reasoning (KB matching, transitive chains)
- Memory-constrained environments (8x smaller vectors)
- Maximum speed needed (1.5x faster)
- Large knowledge bases
- Deterministic, reproducible results required
## Implementation Example
```javascript
// Initialize with the SPHDC strategy
// (getStrategy is assumed to be exported from the same facade module)
import { initHDC, getStrategy } from './src/hdc/facade.mjs';

initHDC('sparse-polynomial');

// Create vectors (k=4 exponents each)
const sphdc = getStrategy('sparse-polynomial');
const vectorA = sphdc.createFromName('ConceptA', 4);
const vectorB = sphdc.createFromName('ConceptB', 4);
const vectorC = sphdc.createFromName('ConceptC', 4);

// Bind vectors (Cartesian XOR with Min-Hash sparsification)
const bound = sphdc.bind(vectorA, vectorB);

// Calculate similarity (Jaccard Index)
const similarity = sphdc.similarity(vectorA, vectorB);

// Bundle multiple vectors (set union + sparsification)
const bundled = sphdc.bundle([vectorA, vectorB, vectorC]);
```
## Advanced Topics

### Hybrid Strategies
Consider combining SPHDC and dense binary strategies:
- Use SPHDC for symbolic reasoning (faster, smaller)
- Use dense binary for similarity-based retrieval
- Implement adaptive strategy selection based on workload (see the sketch below)
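One possible shape for adaptive selection is sketched below. It assumes `getStrategy` is exported from the facade alongside `initHDC`, and uses `'dense-binary'` as a hypothetical identifier for the dense strategy; the actual name in AGISystem2 may differ.

```javascript
import { initHDC, getStrategy } from './src/hdc/facade.mjs';

function selectStrategy(workload) {
  // Similarity-based retrieval favors dense binary; pure symbolic
  // reasoning favors the smaller, faster SPHDC representation.
  // NOTE: 'dense-binary' is an assumed identifier, not confirmed by the docs.
  const name = workload.needsRetrieval ? 'dense-binary' : 'sparse-polynomial';
  initHDC(name);
  return getStrategy(name);
}

// Usage
const symbolic = selectStrategy({ needsRetrieval: false }); // SPHDC
const retrieval = selectStrategy({ needsRetrieval: true }); // dense binary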
### k Parameter Tuning
SPHDC performance varies with k:
- k=4 (default): Best balance of speed and safety margin
- k=2 or k=1: Maximum speed for simple use cases
- k=8+: Not recommended (slower, no accuracy benefit)
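In terms of the facade API from the Implementation Example, k is simply the second argument to `createFromName` (again assuming `getStrategy` is exported from the facade):

```javascript
import { initHDC, getStrategy } from './src/hdc/facade.mjs';

initHDC('sparse-polynomial');
const sphdc = getStrategy('sparse-polynomial');

// k is the second argument to createFromName
const balanced = sphdc.createFromName('ConceptA', 4); // default: speed + safety margin
const minimal  = sphdc.createFromName('ConceptA', 2); // maximum speed for simple cases
// k >= 8 is not recommended: binding cost grows as k² with no accuracy benefit
```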