AGISystem2 Research

The Algorithmic Revolution

Transitioning from Dense Matrix Multiplication to Probabilistic Sparsity.

1.1 The SLIDE Algorithm: Sub-Linear Deep Learning Engine

The SLIDE architecture, developed at Rice University, addresses the computational cost of neural network training by using Locality Sensitive Hashing (LSH). For each input, it retrieves the small set of neurons likely to activate, avoiding the dense matrix multiplication needed to compute full layer activations.
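The selection is cheap because, under a signed-random-projection (SimHash) scheme, two vectors collide with a probability that grows with their cosine similarity, so neurons whose weight vectors align with the input, i.e., those with large pre-activations, are the likeliest to be retrieved. Below is a minimal, illustrative sketch of that lookup in Python; the layer sizes, hash parameters, and variable names are assumptions for the example, not values from the SLIDE implementation.

import numpy as np

rng = np.random.default_rng(0)

D, N = 128, 4096   # input dimension, number of neurons in the layer
K, L = 8, 4        # hash bits per table, number of hash tables

W = rng.standard_normal((N, D))          # weight matrix: one row per neuron
planes = rng.standard_normal((L, K, D))  # random hyperplanes for L SimHash tables

def simhash(v, t):
    # K-bit signature: the sign of v against each hyperplane of table t,
    # packed into a single integer bucket id.
    bits = planes[t] @ v > 0
    return int(bits.dot(1 << np.arange(K)))

# Index every neuron's weight vector into each table (done once, offline).
tables = [{} for _ in range(L)]
for t in range(L):
    for n in range(N):
        tables[t].setdefault(simhash(W[n], t), []).append(n)

# Query: neurons that collide with the input in any table form the active
# set; only their dot products are computed, replacing the dense W @ x.
x = rng.standard_normal(D)
active = sorted({n for t in range(L) for n in tables[t].get(simhash(x, t), [])})
out = W[active] @ x   # activations for the active neurons only
print(f"computed {len(active)} of {N} neurons")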

1.2 ThirdAI and Dynamic Sparsity

ThirdAI commercialized these concepts in its BOLT engine, which aims to enable the training and fine-tuning of large-scale models directly on standard x86 and ARM CPUs.
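As a toy illustration of why dynamic sparsity suits CPUs, the sketch below restricts both the forward pass and the weight update of one layer to a pre-selected active set, so per-sample cost scales with the number of active neurons rather than the layer width. This is a hedged example of the general idea, not the BOLT API; the random active set stands in for an LSH lookup like the one above.

import numpy as np

rng = np.random.default_rng(1)
D, N, lr = 64, 2048, 0.01
W = rng.standard_normal((N, D)) * 0.01   # one layer, one row per neuron

def sparse_step(x, y_true, active):
    # One SGD step on a squared-error loss, restricted to the active set:
    # forward, gradient, and update all touch |active| rows, never all N.
    y = W[active] @ x                     # forward: |active| dot products
    grad = y - y_true[active]             # dL/dy for the active outputs
    W[active] -= lr * np.outer(grad, x)   # update only the active rows
    return y

x = rng.standard_normal(D)
y_true = rng.standard_normal(N)
active = rng.choice(N, size=N // 20, replace=False)  # stand-in for an LSH lookup
sparse_step(x, y_true, active)
print(f"touched {len(active)} of {N} neurons this step")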

1.3 Neural Magic: Sparsification and Serving

Neural Magic, an MIT spin-off, focuses on sparsification: pruning networks by 80-90% with minimal loss of accuracy. Following its 2025 acquisition by Red Hat, the team has pivoted to contributing these optimizations to vLLM through the nm-vllm project.
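As a concrete, simplified example of sparsification, the sketch below performs one-shot global magnitude pruning, zeroing the smallest-magnitude 90% of a weight matrix. This illustrates the general technique only, not Neural Magic's exact recipe; production pipelines typically prune gradually during fine-tuning to preserve accuracy.

import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((512, 512))   # a dense weight matrix

def magnitude_prune(weights, sparsity=0.9):
    # Zero the smallest-magnitude entries so `sparsity` of them become 0.
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

W_sparse, mask = magnitude_prune(W, sparsity=0.9)
print(f"achieved sparsity: {1 - mask.mean():.1%}")   # ~90.0%

At inference time the savings come from kernels that skip the zeroed weights, which is where sparsity-aware runtimes such as Neural Magic's come in.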

Alternative Sparse Methods