What is AGISystem2?

AGISystem2 is an AI research platform inspired by the System 2 concept from cognitive psychology—the slow, deliberate, logical mode of human reasoning (as opposed to System 1's fast, intuitive responses). Born from European research initiatives, the platform brings together formal verification, neural-symbolic hybrid reasoning, and hyperdimensional computing.

A key innovation is the introduction of a semantic programming language (DSL) that gives formal, machine-verifiable meaning to structured fragments of language. Instead of teaching humans to write in formal logic, we created a language that captures the semantic structure implicit in common reasoning patterns. The result is machine-checkable reasoning over formally modeled content (with proofs/traces and deterministic evaluation coverage).

The Vision: Universal Theory Engine (UTE)

AGISystem2 is driven by a research hypothesis: a plausible route toward “real” AGI goes through the ability to represent and manipulate theories as computational objects—compositional, queryable, checkable, and revisable.

We call this target Universal Theory Engine (UTE): a platform where scientific and operational knowledge can be encoded as executable theories, verified by proofs/traces, revised under new evidence, and used to plan experiments. This connects directly to the project name: “System 2” as deliberate, auditable reasoning over explicit models.

The practical goal is not a single monolithic super-agent. It is a family of domain-specialized agents that can review, argue, prove, and automate daily scientific work—with auditability. We are still far from that end state. Getting there likely requires a community effort: LLMs can propose, but formal models and evaluation must judge.

  • UTE roadmap: capability map + DS32–DS38 research specs
  • Capacity research: elastic/dynamic-size HDC and saturation experiments
  • Agentic loops: AutoDiscovery-style synthesis + eval as judge
  • Research hub: benchmarks, saturation, directions, snapshots

What You Can Do With It (Today)

Build checkable micro-theories

Encode structured facts and rules in a DSL, query/prove them, and get explainable traces. This is the “unit of progress” toward UTE: small theories that are executable, inspectable, and testable.
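
As a toy illustration of the idea (plain Python, not the AGISystem2 DSL or Session API), a micro-theory can be as small as a fact set plus a query that returns its own trace:

```python
# A micro-theory: a small, inspectable fact set with a query that explains
# itself. All names here are illustrative, not the actual DSL.
THEORY = {
    "facts": {("water", "boils_at_c", 100), ("water", "state", "liquid")},
}

def query(subject, predicate, theory):
    """Return (value, trace) for a single lookup; trace explains the answer."""
    for (s, p, v) in theory["facts"]:
        if s == subject and p == predicate:
            return v, [f"matched fact: {s} {p} {v}"]
    return None, [f"no fact for {subject} {predicate}"]

value, trace = query("water", "boils_at_c", THEORY)
assert value == 100
print(trace)  # the "explainable trace" in miniature
```

Because the theory is plain data, it is trivially testable: the unit of progress is a theory plus its passing tests.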

Compare reasoning substrates

Explore multiple HDC/VSA strategies under one contract (dense/sparse/metric/EMA/EXACT), and study scaling behavior. This is essential for capacity, decoding, and fast retrieval research.
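
To make the substrate idea concrete, here is a minimal dense-binary HDC sketch in Python (NumPy assumed; the project's actual strategies differ in representation and decoding): XOR as binding, majority vote as bundling, bit agreement as similarity:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

def rand_hv():
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    # XOR binding: its own inverse, output dissimilar to both inputs
    return a ^ b

def bundle(*vs):
    # Majority vote: the result stays similar to each bundled input
    # (use odd counts; ties on even counts fall to 0 here)
    return (np.sum(vs, axis=0) * 2 > len(vs)).astype(np.uint8)

def similarity(a, b):
    # Fraction of agreeing bits: 1.0 identical, ~0.5 for unrelated vectors
    return float(np.mean(a == b))

role, filler = rand_hv(), rand_hv()
pair = bind(role, filler)
assert similarity(bind(pair, role), filler) == 1.0  # unbinding recovers filler
```

Sparse, metric, and exact strategies replace these three operations with different geometries while keeping the same contract, which is what makes cross-strategy comparison meaningful.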

Run reproducible experiments

Benchmarks and evaluation suites make progress measurable (correctness + performance) across strategies and reasoning priorities. Capacity experiments (saturation) stress-test “theory as vector” representations.
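
A minimal saturation probe, using the same toy dense-binary encoding (illustrative, not the project's evaluation harness): bundle k random items and watch member similarity decay toward chance as k grows:

```python
import numpy as np

rng = np.random.default_rng(42)
D = 10_000  # dimensionality; capacity grows with D

def rand_hv():
    return rng.integers(0, 2, D, dtype=np.uint8)

def bundle(vs):
    # Majority vote with random tie-breaking for even counts
    s = np.sum(vs, axis=0, dtype=np.int64) * 2
    out = (s > len(vs)).astype(np.uint8)
    ties = s == len(vs)
    out[ties] = rng.integers(0, 2, int(ties.sum()), dtype=np.uint8)
    return out

def member_similarity(k):
    """Bundle k random items; mean bit agreement of members with the bundle."""
    items = [rand_hv() for _ in range(k)]
    memory = bundle(items)
    return float(np.mean([np.mean(memory == v) for v in items]))

for k in (3, 31, 301):
    print(f"k={k:4d}  mean member similarity={member_similarity(k):.3f}")
```

As k grows the members become harder to distinguish from noise, which is exactly the saturation behavior the capacity experiments quantify.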

Use LLMs safely (as proposers)

LLMs can translate, propose patches, and propose theory fragments. AGISystem2’s job is to validate and harden those proposals with eval data, proofs/traces, and reproducible workflows.
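
Sketched as a loop (illustrative names; `propose` would stand in for an LLM call), the gate is simple: a proposal is merged only if every deterministic eval still passes on the merged knowledge base:

```python
# Proposer/judge split in miniature: the LLM may suggest anything,
# but only eval-passing proposals reach the KB. Names are illustrative.
FACTS = {("water", "state", "liquid")}

def consistent(facts):
    """Deterministic eval: no subject/predicate maps to two different values."""
    seen = {}
    return all(seen.setdefault((s, p), v) == v for (s, p, v) in facts)

def validate(proposal, facts, evals):
    """Accept the proposal only if every eval passes on the merged KB."""
    candidate = facts | proposal
    return candidate if all(check(candidate) for check in evals) else facts

FACTS = validate({("ice", "state", "solid")}, FACTS, [consistent])  # accepted
FACTS = validate({("water", "state", "gas")}, FACTS, [consistent])  # rejected
assert ("ice", "state", "solid") in FACTS
assert ("water", "state", "gas") not in FACTS
```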

Start with Getting started and the Research hub. For the UTE roadmap and why we think it matters, see Universal Theory Engine (UTE).

Active Research: AGISystem2 is an open-source research project. We're continuously expanding coverage of natural language patterns and improving reasoning capabilities. The system works well for structured domains; open-ended conversation is outside scope.

Why This Matters

For applications where verification is critical—compliance, medical decisions, legal reasoning, safety-critical systems—probabilistic pattern matching alone isn't enough. AGISystem2 provides:

  • Machine-checkable reasoning with explainable proof traces
  • Deterministic, reproducible evaluation of correctness and performance
  • Automatic contradiction detection in knowledge bases
  • Auditable reasoning over explicit, revisable theories

Capability Comparison (UTE Lens)

| Approach | Strength | Gap (toward UTE) | Why AGISystem2 focuses on HDC + DSL |
|---|---|---|---|
| SMT / proof engines (e.g., Z3) | Strong correctness guarantees for well-defined constraints | Hard to connect to open-world domain knowledge, retrieval, and "meaningful" NL pragmatics | We keep proofs/traces, but add a theory substrate and retrieval layer so knowledge is modular and scalable (UTE). |
| Logic engines (Prolog/Datalog) | Elegant rule execution and symbolic unification | Weak on similarity-based retrieval, noisy superposition, and fast approximate candidate generation | We unify symbolic reasoning with a VSA/HDC substrate for retrieval and capacity experiments (Holographic directions). |
| Knowledge graphs + embeddings | Great at scalable search and link discovery | Weak at executable semantics, proofs, contradiction handling, and model revision | AGISystem2 treats theories as executable objects (DSL + reasoning) and uses HDC strategies as a controlled substrate to study retrieval + decoding (Saturation). |
| LLM-only "reasoning" | Fast synthesis, broad language coverage, can propose solutions and code | Difficult to verify; tends to mix retrieval and inference in ways that are not auditable | We use LLMs as proposers, and keep deterministic evaluation + traces as the judge (agentic loops). |
| AGISystem2 | Executable DSL + symbolic proofs/traces + multi-strategy HDC experiments | UTE is a long-horizon roadmap: evidence, causality, uncertainty, numeric models, revision, experiment planning | We are building the "Linux-like" research platform: modular theories, reproducible evaluation, swappable HDC geometries/strategies (Research hub). |

AGISystem2 is intentionally a hybrid: symbolic semantics for correctness, and HDC/VSA strategies for retrieval and capacity research. See HRR comparison and HDC strategy directions.

Scope & Honesty

What AGISystem2 does well:

  • Deterministic query/prove over formalized theories, with explainable traces
  • Structured domains where facts and rules can be modeled explicitly
  • Reproducible evaluation and capacity experiments across HDC strategies

What AGISystem2 does NOT do:

  • Open-ended conversation or general natural language understanding
  • Guarantee that an encoded theory is itself correct (only the reasoning within it)

Fundamental limitation: The system reasons correctly within a theory, but cannot guarantee the theory itself is correct. See The Symbol Grounding Problem for an honest discussion of this epistemological limitation.

🚀 Quick Start

Choose your path:

  • 📖 Getting Started: install, configure, first query
  • 🔤 NL2DSL Translation: natural language to DSL
  • 🤖 LLM Integration: combine with Claude/GPT
  • ⚖️ vs Other Tech: Prolog, Datalog, Wolfram...

Technical Capabilities (Implemented)

Multi-Strategy HDC

Five pluggable vector representations: dense-binary, sparse-polynomial, metric-affine, metric-affine-elastic, exact. Learn more

Transitive Inference

Automatic reasoning across IS_A hierarchies and other transitive relations.
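
A transitive IS_A query reduces to reachability over stored facts; a cycle-safe sketch in plain Python (not the engine's implementation):

```python
IS_A = {("dog", "mammal"), ("mammal", "animal"), ("animal", "living_thing")}

def is_a(x, y, facts, seen=None):
    """True if x IS_A y via any chain of facts (cycle-safe depth-first search)."""
    seen = seen if seen is not None else set()
    if (x, y) in facts:
        return True
    seen.add(x)
    return any(is_a(m, y, facts, seen)
               for (a, m) in facts if a == x and m not in seen)

assert is_a("dog", "living_thing", IS_A)  # dog -> mammal -> animal -> living_thing
assert not is_a("animal", "dog", IS_A)    # transitivity is not symmetric
```

The same closure applies to any relation declared transitive; only the fact set changes.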

Backward Chaining

Rule-based reasoning with And/Or conditions and full proof traces.
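
A toy backward chainer shows the shape of this: Or-alternatives of And-bodies, with a proof trace built as goals succeed (illustrative rule encoding, not the DSL):

```python
# Rules map a head to alternative bodies (Or); each body is a list of
# goals that must all hold (And). Encoding is illustrative.
RULES = {
    "can_fly": [["is_bird", "has_wings"]],
    "is_bird": [["is_penguin"], ["is_sparrow"]],
}
FACTS = {"is_sparrow", "has_wings"}

def prove(goal, trace, depth=0):
    """Backward chaining: try facts first, then each Or-branch of And-goals."""
    if goal in FACTS:
        trace.append("  " * depth + f"{goal}  [fact]")
        return True
    for body in RULES.get(goal, []):
        sub = []
        if all(prove(g, sub, depth + 1) for g in body):
            trace.append("  " * depth + f"{goal}  [rule: {' AND '.join(body)}]")
            trace.extend(sub)
            return True
    return False

trace = []
assert prove("can_fly", trace)
print("\n".join(trace))  # full proof trace, indented by depth
```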

Contradiction Detection

Automatic identification of inconsistencies in knowledge bases.
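
With polarized facts, the simplest form of this check looks for statements asserted both true and false (a sketch, not the system's detector):

```python
FACTS = {
    ("socrates", "is_mortal", True),
    ("socrates", "is_mortal", False),  # asserted both ways: inconsistent
    ("plato", "is_mortal", True),
}

def contradictions(facts):
    """Subject/predicate pairs asserted both true and false."""
    return {(s, p) for (s, p, v) in facts if (s, p, not v) in facts}

assert contradictions(FACTS) == {("socrates", "is_mortal")}
```

Real detection must also surface contradictions that are only derivable through rules, which is why it is paired with the proof machinery above.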

For details, see Specifications and the Research hub (benchmarks, saturation, and strategy directions).

Documentation Sections

Use Cases

Practical applications and research patterns: compliance (research), agent planning (research), scientific reasoning support. Includes when NOT to use AGISystem2.

Reasoning Engines

Two reasoning approaches: Symbolic (classical logic with HDC acceleration) and Holographic (HDC-first with symbolic validation).

Architecture

System layers: HDC core, runtime (Session, Executor), reasoning (Query, Prove), parser, and output generation.

DSL Syntax

Language reference: statements, operators, macros, theories, and Core theory definitions.

APIs

Session API: learn(), query(), prove(), generateText(). HDC Facade: bind(), bundle(), similarity().

Specifications

Design specifications (DS series), evaluation suites, and test coverage matrix.

Research

Reproducible experiments: multi-strategy comparisons, saturation/capacity evaluation, and research roadmaps (UTE, dynamic size, agentic loops). Start at Research Overview.

Wiki

Terminology, concepts, mathematical foundations. Quick reference for HDC, similarity measures, algebraic properties.

A Long-Term Bet: Domain Semantic Libraries

UTE will not emerge from a single giant model. It will likely be built as a growing ecosystem of domain libraries: executable theories, evaluation suites, and evidence conventions. This is hard work and it needs a community.

If you want to contribute: build a small theory + tests, not a marketing claim. That’s how UTE gets real.

Where We Are Today

✓ What Works Now

  • Deterministic reasoning over formalized theories (query/prove + traces)
  • Multi-strategy HDC (dense/sparse/metric/EMA/EXACT) under one contract
  • Evaluation harness for reproducible regressions (FastEval + unit tests)
  • Capacity experiments (Saturation Evaluation) for KB superposition stress tests
  • AutoDiscovery-style workflows for systematic bug discovery and hardening

⚡ Active Development

  • Broader DSL coverage and richer Core semantic libraries
  • Reasoning scalability (strategy geometry, dynamic size, and decoding/cleanup)
  • More robust proofs and contradiction workflows
  • Higher-level Session APIs (proposed)
  • Agentic loops that propose patches/theories and validate with eval data

🔭 Future Vision

  • Domain-specialized “scientist agents” that can review, prove, and argue
  • UTE capabilities: evidence-aware revision + mechanistic models + experiments
  • Probabilistic + numeric theory layers (with proof-grade traces)
  • Organized theory acquisition workflows (LLMs + evals as scientific loop)
  • Interoperability with external KBs/graphs (as sources, not as truth)

Philosophy: Theories as Tools

AGISystem2 adopts a pragmatic stance: theories are tools, not truths. The system guarantees correct reasoning within a theory, not the correctness of the theory itself. This isn't a limitation—it's how all formal systems work, from mathematics to legal codes.

The value is practical: transform natural language specifications into formal theories that can be analyzed, debugged, and revised. The consequences of your knowledge become computable, verifiable, and auditable.

Read about epistemological foundations → · Compare with other approaches →

Site Map

  • Getting Started
  • Architecture
  • Reasoning Engines
  • Foundational Theory
  • HDC Strategies
  • Trustworthy AI
  • Research
  • Wiki & Concepts