Executive Summary
AGISystem2 starts from a suspicion that has grown steadily stronger in our work: the future of AI will not be secured by scaling large models. It will depend on whether intelligence can be made more structured, more inspectable, more efficient, and more governable.
A useful metaphor comes from Daniel Kahneman's Thinking, Fast and Slow. In human psychology, System 1 is fast, intuitive, associative, and generative, while System 2 is more deliberate, controlled, and reflective. These are human categories, not computational ones, and the analogy should not be taken too literally. Even so, it reveals something important. Large language models often behave like a computational analogue of System 1: powerful, flexible, and remarkably useful, yet not sufficient on their own for stable reasoning, explicit validation, or disciplined execution. What AI now seems to need is a stronger functional equivalent of System 2, though not one that inherits human slowness or cognitive weakness. A machine-level System 2 can be faster, more modular, and more auditable, even if it will still face limits of its own.
That intuition became sharper as we worked on agent systems, controlled execution, enterprise workflows, and reusable agent skills. Again and again, the strongest systems were not those in which language models improvised everything. They were the ones in which decomposition became clearer, execution became more structured, and intermediate artefacts could be inspected and validated. What first looked like an implementation detail began to look like a missing layer in AI.
That is the opening from which Meta-Rational Pragmatics (MRP) emerged. MRP names the layer in which interpretation is governed, subproblems are made tractable, routes between different forms of computation are selected explicitly, and learned capabilities are reorganized into more stable and reusable structures.
The technical project we are building from this direction is MRP-VM. It is not a classical virtual machine with a rigid instruction set. It is a runtime for goals, frames, routes, plugins, and auditable execution. Its purpose is to move part of intelligence out of opaque model inference and into a form that is easier to control, cheaper to run, and more realistic to deploy.
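To make the non-classical VM idea concrete, here is a minimal sketch of what a runtime for goals, routes, plugins, and auditable execution could look like. All names (`Goal`, `Plugin`, `Route`, `Runtime`) are illustrative assumptions, not the actual MRP-VM API: the point is only that every step of execution leaves an inspectable record instead of disappearing into opaque inference.

```python
# Hypothetical sketch of an MRP-VM-style runtime; names are assumptions,
# not the real project API.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Goal:
    description: str

@dataclass
class Plugin:
    # A unit of computation: symbolic procedure, retrieval step,
    # or a call into a small learned model.
    name: str
    run: Callable[[Any], Any]

@dataclass
class Route:
    # An explicitly selected chain of plugins, rather than improvised inference.
    steps: list[Plugin]

@dataclass
class Runtime:
    audit_log: list[dict] = field(default_factory=list)

    def execute(self, goal: Goal, route: Route, payload: Any) -> Any:
        # Each step's output is recorded so execution stays auditable.
        for plugin in route.steps:
            payload = plugin.run(payload)
            self.audit_log.append({
                "goal": goal.description,
                "plugin": plugin.name,
                "output": payload,
            })
        return payload
```

For example, a two-step route that lowercases then tokenizes a string would leave two audit entries, one per plugin, each carrying the intermediate output.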
In this picture, LLMs still matter, especially smaller models that can be fine-tuned and embedded efficiently. But they are not the destination. They are components in a broader neuro-symbolic architecture that includes constrained language, symbolic procedures, compact learned modules, memory structures, validation layers, and CPU-friendly execution paths.
This site is where we develop that argument and test its technical consequences. It is a space for a different question about AI: not only how capable models can become, but how intelligence itself might be reorganized into systems that are more dependable, more local, and more computationally serious.
Where To Start
These are the main entry points into the current program.
- Primary Article Series: Meta-Rational Pragmatics. The central article series for the site, with MRP-VM, governed interpretation, regime selection, executable natural language, and runtime architecture as the main thread. Main conceptual and architectural track. Published March 19, 2026.
- Research Map: Research Themes. A structured map of the technical families around the program: verification, symbolic methods, VSA/HDC, world models, small models, CPU-oriented ML, and architectural alternatives. Adjacency map for the broader research agenda. Published March 19, 2026.
- Method Article: Toward a Practical System 2 for AI-Assisted Research. Principles, failure modes, and evaluation signals for using AI in serious research workflows without collapsing rigor into surface fluency. Methodological companion for research automation and reviewer-grade control. Published March 19, 2026.
Research Direction
The primary build focus is the construction and refinement of MRP-VM as a real runtime for reliable AI. That includes explicit decomposition, skill contracts, typed intermediates, validators, bounded workflows, and routing across symbolic, learned, retrieval, and hybrid execution paths. In practical terms, it is an attempt to make AI easier to govern, easier to evaluate, and easier to deploy outside hyperscale-only environments.
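The notions of skill contracts, typed intermediates, and validators can be sketched in a few lines. This is a minimal illustration under assumed names (`SkillContract`, `Intermediate`, `run_pipeline` are not the real MRP-VM interfaces): each skill declares the kind of artefact it produces, and a validator checks that artefact before it is allowed to propagate into the next stage of a bounded workflow.

```python
# Illustrative sketch of skill contracts with typed intermediates and
# validators; all names are assumptions, not the real MRP-VM API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Intermediate:
    kind: str       # declared artefact type, e.g. "numbers" or "citation_list"
    value: object

@dataclass
class SkillContract:
    name: str
    produce: Callable[[object], Intermediate]
    validate: Callable[[Intermediate], bool]

    def run(self, payload: object) -> Intermediate:
        # Validation is enforced at the contract boundary, not left to callers.
        out = self.produce(payload)
        if not self.validate(out):
            raise ValueError(f"{self.name}: intermediate failed validation")
        return out

def run_pipeline(payload: object, contracts: list[SkillContract]) -> object:
    # Bounded workflow: each stage's artefact is validated before the next runs.
    for contract in contracts:
        payload = contract.run(payload).value
    return payload
```

A failed validation halts the workflow with an explicit error at a known stage, which is the property that makes this style of execution easier to govern and evaluate than free-form model output.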
This matters directly for enterprise settings, including regulated environments such as pharma, where auditability and bounded behavior are not optional. It also matters for research automation, because the same runtime discipline that improves agent reliability can support literature work, synthesis, protocol handling, and structured inquiry under stronger epistemic control.
The broader horizon is not abstract futurism. It is a concrete path toward AI that can increasingly run on ordinary hardware, remain legible under scrutiny, and carry more of its intelligence in reusable structure rather than permanently expensive inference.
Experiments
Public experiments and open references that support the broader direction.
- Executable Language: Executable CNL. Executable programming language based on constrained natural language syntax.
- CPU-First Learning: VSABrains. Experiment suite for discrete, CPU-first learning inspired by A Thousand Brains.
- Representations: VSA Representations. Experiments with VSA/HDC representations and their tradeoffs.
- Text Reasoning: VSAText. Experiments on using VSA/HDC to analyze large volumes of text with partial semantic structure.
- Structured Manipulation: VSAVM. VSA plus symbolic virtual machine for learning and structured manipulation.
- Learning Systems: SomaVM. Learning-system experiment based on a VM that simulates pleasure and pain as reward.
- Reasoning Engine: HDC-RE. Reasoning engine built on hyperdimensional representations.
- Prediction Systems: BSP. Bitset System for Prediction: CPU-friendly continuous learning without transformers.
- Kernel and CNL: UBHNL. Universal Boolean Hypergraph kernel with a constrained natural language and DSL front-end.
- Explainable Geometry: Spock. Deterministic neuro-symbolic geometry for explainable reasoning.