AGISystem2 is an AI research platform inspired by the System 2 concept from cognitive psychology—the slow, deliberate, logical mode of human reasoning (as opposed to System 1's fast, intuitive responses). Born from European research initiatives, the platform brings together formal verification, neural-symbolic hybrid reasoning, and hyperdimensional computing.
A key innovation is the introduction of a semantic programming language (DSL) that gives formal, machine-verifiable meaning to structured fragments of language. Instead of teaching humans to write in formal logic, we created a language that captures the semantic structure implicit in common reasoning patterns. The result is machine-checkable reasoning over formally modeled content: answers come with proofs or traces, and evaluation is deterministic and reproducible.
AGISystem2 is driven by a research hypothesis: a plausible route toward “real” AGI goes through the ability to represent and manipulate theories as computational objects—compositional, queryable, checkable, and revisable.
We call this target Universal Theory Engine (UTE): a platform where scientific and operational knowledge can be encoded as executable theories, verified by proofs/traces, revised under new evidence, and used to plan experiments. This connects directly to the project name: “System 2” as deliberate, auditable reasoning over explicit models.
The practical goal is not a single monolithic super-agent. It is a family of domain-specialized agents that can review, argue, prove, and automate daily scientific work—with auditability. We are still far from that end state. Getting there likely requires a community effort: LLMs can propose, but formal models and evaluation must judge.
Encode structured facts and rules in a DSL, query/prove them, and get explainable traces. This is the “unit of progress” toward UTE: small theories that are executable, inspectable, and testable.
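To make the idea of a "small executable theory" concrete, here is a language-agnostic sketch in Python: a handful of facts, one rule, forward chaining to a fixpoint, and a provenance trace for every derived conclusion. The triple format, rule encoding, and names (`R1`, `is_a`, `mortal`) are assumptions for illustration only; this is not AGISystem2's DSL syntax.

```python
# Minimal sketch of a "theory as executable object": facts, single-condition
# rules, and a proof trace per derived fact. Illustrative only; the actual
# AGISystem2 DSL is documented in the language reference.
facts = {("is_a", "socrates", "human")}
rules = [
    # if X is_a human then X is mortal
    {"if": ("is_a", None, "human"), "then": ("mortal", None), "name": "R1"},
]

def run(facts, rules):
    """Forward-chain to a fixpoint, recording which rule produced each fact."""
    derived, trace = set(facts), {}
    changed = True
    while changed:
        changed = False
        for rule in rules:
            rel, _, obj = rule["if"]
            for f in list(derived):
                # only 3-tuples (relation, subject, object) can match the condition
                if len(f) == 3 and f[0] == rel and f[2] == obj:
                    new = (rule["then"][0], f[1])
                    if new not in derived:
                        derived.add(new)
                        trace[new] = (rule["name"], f)  # provenance for the trace
                        changed = True
    return derived, trace

derived, trace = run(facts, rules)
print(("mortal", "socrates") in derived)  # True
print(trace[("mortal", "socrates")])      # ('R1', ('is_a', 'socrates', 'human'))
```

The point of the sketch is the last line: every derived fact carries the rule and premises that produced it, which is what makes such a theory inspectable and testable.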
Explore multiple HDC/VSA strategies under one contract (dense/sparse/metric/EMA/EXACT), and study scaling behavior. This is essential for capacity, decoding, and fast retrieval research.
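For readers unfamiliar with VSA primitives, the contract shared by these strategies can be sketched with one possible geometry, dense bipolar (+1/−1) vectors. This is an illustration of the bind/bundle/similarity idea, not AGISystem2's exact dense-binary implementation; the dimension and all names below are arbitrary choices.

```python
# Illustrative dense bipolar hypervector sketch (one HDC/VSA geometry among
# several; not the platform's actual strategy code).
import random

DIM = 10_000  # high dimensionality makes random vectors quasi-orthogonal

def hv(rng):
    """Random dense bipolar hypervector."""
    return [rng.choice((-1, 1)) for _ in range(DIM)]

def bind(a, b):
    """Element-wise multiply: associates two vectors; self-inverse."""
    return [x * y for x, y in zip(a, b)]

def bundle(*vs):
    """Majority vote: superposes vectors so the result stays similar to each."""
    return [1 if sum(col) > 0 else -1 for col in zip(*vs)]

def similarity(a, b):
    """Normalized dot product in [-1, 1]."""
    return sum(x * y for x, y in zip(a, b)) / DIM

rng = random.Random(42)
color, shape, red, round_ = hv(rng), hv(rng), hv(rng), hv(rng)

# Encode a record {color: red, shape: round} as one vector.
record = bundle(bind(color, red), bind(shape, round_))

# Unbinding the role recovers something close to the filler ...
decoded = bind(record, color)
print(similarity(decoded, red))     # high (around 0.5 for a 2-item bundle)
# ... while unrelated vectors stay near zero similarity.
print(similarity(decoded, round_))  # near 0
```

Different strategies (sparse, metric, elastic, exact) change how these three operations are realized, which is exactly why capacity and decoding behavior differ between them.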
Benchmarks and evaluation suites make progress measurable (correctness + performance) across strategies and reasoning priorities. Capacity experiments (saturation) stress-test “theory as vector” representations.
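A saturation experiment of the kind these suites run can be sketched independently of the platform: bundle a growing number of random hypervectors and measure how reliably the members are still recognized against fresh distractors. The dimension, item counts, and recognition criterion below are arbitrary illustrative choices, not AGISystem2's benchmark parameters.

```python
# Toy capacity/saturation experiment: recall accuracy of a majority-vote
# bundle degrades as more items are superposed. Illustrative only.
import random

DIM = 1024

def hv(rng):
    return [rng.choice((-1, 1)) for _ in range(DIM)]

def bundle(vectors):
    return [1 if sum(col) > 0 else -1 for col in zip(*vectors)]

def similarity(a, b):
    return sum(x * y for x, y in zip(a, b)) / DIM

def recall_accuracy(n_items, n_distractors=100, seed=0):
    """Fraction of bundled items still more similar to the bundle
    than every random distractor vector."""
    rng = random.Random(seed)
    items = [hv(rng) for _ in range(n_items)]      # odd counts avoid vote ties
    noise = [hv(rng) for _ in range(n_distractors)]
    b = bundle(items)
    worst_noise = max(similarity(b, v) for v in noise)
    return sum(similarity(b, v) > worst_noise for v in items) / n_items

for n in (5, 25, 125):
    print(n, recall_accuracy(n))  # accuracy drops as the bundle saturates
```

Each member's similarity to the bundle shrinks roughly as 1/√n, so at some point it falls below the noise floor of random distractors; that crossover is the saturation point the capacity experiments measure.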
LLMs can translate, propose patches, and propose theory fragments. AGISystem2’s job is to validate and harden those proposals with eval data, proofs/traces, and reproducible workflows.
Start with Getting started and the Research hub. For the UTE roadmap and why we think it matters, see Universal Theory Engine (UTE).
For applications where verification is critical—compliance, medical decisions, legal reasoning, safety-critical systems—probabilistic pattern matching alone isn't enough. The table below positions AGISystem2 against neighboring approaches:
| Approach | Strength | Gap (toward UTE) | Why AGISystem2 focuses on HDC + DSL |
|---|---|---|---|
| SMT / Proof engines (e.g. Z3) | Strong correctness guarantees for well-defined constraints | Hard to connect to open-world domain knowledge, retrieval, and “meaningful” NL pragmatics | We keep proofs/traces, but add a theory substrate and retrieval layer so knowledge is modular and scalable (UTE). |
| Logic engines (Prolog/Datalog) | Elegant rule execution and symbolic unification | Weak on similarity-based retrieval, noisy superposition, and fast approximate candidate generation | We unify symbolic reasoning with a VSA/HDC substrate for retrieval and capacity experiments (Holographic directions). |
| Knowledge graphs + embeddings | Great at scalable search and link discovery | Weak at executable semantics, proofs, contradiction handling, and model revision | AGISystem2 treats theories as executable objects (DSL + reasoning) and uses HDC strategies as a controlled substrate to study retrieval + decoding (Saturation). |
| LLM-only “reasoning” | Fast synthesis, broad language coverage, can propose solutions and code | Difficult to verify; tends to mix retrieval and inference in ways that are not auditable | We use LLMs as proposers, and keep deterministic evaluation + traces as the judge (agentic loops). |
| AGISystem2 | Executable DSL + symbolic proofs/traces + multi-strategy HDC experiments | UTE is a long-horizon roadmap: evidence, causality, uncertainty, numeric models, revision, experiment planning | We are building the “Linux-like” research platform: modular theories, reproducible evaluation, swappable HDC geometries/strategies (Research hub). |
AGISystem2 is intentionally a hybrid: symbolic semantics for correctness, and HDC/VSA strategies for retrieval and capacity research. See HRR comparison and HDC strategy directions.
What AGISystem2 does well:
What AGISystem2 does NOT do:
Fundamental limitation: The system reasons correctly within a theory, but cannot guarantee the theory itself is correct. See The Symbol Grounding Problem for an honest discussion of this epistemological limitation.
Choose your path:
Five pluggable vector representations: dense-binary, sparse-polynomial, metric-affine, metric-affine-elastic, exact.
Automatic reasoning across IS_A hierarchies and other transitive relations.
Rule-based reasoning with And/Or conditions and full proof traces.
Automatic identification of inconsistencies in knowledge bases.
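The transitive-hierarchy and contradiction-detection capabilities described above can be illustrated with a self-contained sketch. The class names and the "disjoint classes" convention are assumptions made for this example, not AGISystem2's actual theory format or API.

```python
# Illustrative sketch: transitive closure over IS_A edges, plus a simple
# inconsistency check against classes declared mutually exclusive.
is_a = {
    ("dog", "mammal"), ("mammal", "animal"),
    ("penguin", "bird"), ("bird", "animal"),
}
disjoint = {frozenset({"mammal", "bird"})}  # declared mutually exclusive

def ancestors(x, edges):
    """All classes reachable from x via IS_A (transitive closure)."""
    out, stack = set(), [x]
    while stack:
        cur = stack.pop()
        for a, b in edges:
            if a == cur and b not in out:
                out.add(b)
                stack.append(b)
    return out

def contradictions(entities, edges, disjoint):
    """Entities whose ancestor set contains two classes declared disjoint."""
    bad = []
    for e in entities:
        anc = ancestors(e, edges)
        for pair in disjoint:
            if pair <= anc:
                bad.append((e, tuple(sorted(pair))))
    return bad

print(ancestors("dog", is_a))                   # contains 'mammal' and 'animal'
print(contradictions(["dog"], is_a, disjoint))  # [] -- consistent so far

# Adding an inconsistent fact is flagged automatically:
is_a.add(("dog", "bird"))
print(contradictions(["dog"], is_a, disjoint))  # [('dog', ('bird', 'mammal'))]
```

Transitivity is what makes the last step interesting: "dog IS_A bird" only conflicts with "dog IS_A mammal" once both ancestor chains are expanded.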
For details, see Specifications and the Research hub (benchmarks, saturation, and strategy directions).
Practical applications and research patterns: compliance (research), agent planning (research), scientific reasoning support. Includes when NOT to use AGISystem2.
Two reasoning approaches: Symbolic (classical logic with HDC acceleration) and Holographic (HDC-first with symbolic validation).
System layers: HDC core, runtime (Session, Executor), reasoning (Query, Prove), parser, and output generation.
HDC principles, trustworthy AI patterns, epistemological limitations, privacy analysis, NL2DSL translation, HRR comparison.
Language reference: statements, operators, macros, theories, and Core theory definitions.
Session API: learn(), query(), prove(), generateText(). HDC Facade: bind(), bundle(), similarity().
Design specifications (DS series), evaluation suites, and test coverage matrix.
Reproducible experiments: multi-strategy comparisons, saturation/capacity evaluation, and research roadmaps (UTE, dynamic size, agentic loops). Start at Research Overview.
Terminology, concepts, mathematical foundations. Quick reference for HDC, similarity measures, algebraic properties.
UTE will not emerge from a single giant model. It will likely be built as a growing ecosystem of domain libraries: executable theories, evaluation suites, and evidence conventions. This is hard work and it needs a community.
If you want to contribute: build a small theory + tests, not a marketing claim. That’s how UTE gets real.
AGISystem2 adopts a pragmatic stance: theories are tools, not truths. The system guarantees correct reasoning within a theory, not the correctness of the theory itself. This isn't a limitation—it's how all formal systems work, from mathematics to legal codes.
The value is practical: transform natural language specifications into formal theories that can be analyzed, debugged, and revised. The consequences of your knowledge become computable, verifiable, and auditable.
Read about epistemological foundations → · Compare with other approaches →