This guide documents a first prototype of AGISystem2, a deterministic neuro-symbolic engine intended to act as a genuine “System 2” for intelligent agents. Instead of generating text directly, it thinks in geometry: concepts are regions, facts are points, and context is a stack of theory layers that can hold competing views without losing track of why each view is held. The long-term vision is that such an engine can supervise fast, connectionist “System 1” components—LLMs or other neural models—by providing rigorous checking, explanation, and meta-rational guidance on top of quick but fallible intuition.
In the near term, AGISystem2 is meant to be a “thinking database”: a system that stores knowledge not as loose text or opaque embeddings, but as structured regions in a conceptual space, and that can perform rigorous, reproducible reasoning over that space. It is designed to replace or augment classic RAG setups by offering a deterministic layer that interprets retrieved information, checks consistency, and explains why a claim is accepted or rejected. Instead of asking a language model to improvise logical steps over retrieved passages, you can ask AGISystem2 to compute whether a candidate fact lies inside the appropriate region under the right theories and to attach a geometric demonstration.
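The containment check at the heart of this idea can be pictured with a toy sketch. Everything below is an illustrative assumption: the diamond-shaped (L1-ball) regions, the dimension labels, and the function names are not AGISystem2’s actual data model, just a minimal way to see “fact inside concept” as geometry.

```python
# Hypothetical sketch: a concept as a bounded diamond (centre + L1 radius),
# a fact as a point, and acceptance as geometric containment.

def l1_distance(point, centre):
    """Manhattan (L1) distance between a fact point and a region centre."""
    return sum(abs(a - b) for a, b in zip(point, centre))

def contains(region, point):
    """A fact is accepted if it lies inside the concept's diamond."""
    centre, radius = region
    return l1_distance(point, centre) <= radius

# A toy "bird" concept over two assumed dimensions (size, flight ability).
bird = ((0.3, 0.8), 0.4)

sparrow = (0.2, 0.9)   # small, flies well
penguin = (0.5, 0.1)   # larger, does not fly

print(contains(bird, sparrow))  # True: the claim lies inside the region
print(contains(bird, penguin))  # False: the claim is rejected
```

The distance computation itself doubles as the “geometric demonstration”: rather than a yes/no answer, the engine can report how far a rejected fact sits from the region boundary.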
This makes the engine a natural fit for complex consistency checks. Legal and regulatory compliance, internal policy enforcement, and standard-operating-procedure validation all require that many small rules fit together without hidden contradictions. AGISystem2 can ingest rules and facts about regulations, contracts, or business processes, represent them as concepts and theory layers, and then determine whether a given scenario falls inside a permitted region, a forbidden region, or a conflict zone that needs escalation. Because facts and values live on separate dimensions, it can also compare how the same scenario behaves under different legal regimes or value systems.
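One way to picture the permitted/forbidden/conflict split is to stack simple labelled regions and see which ones a scenario point falls into. The layer structure, region shapes, and names below are illustrative assumptions rather than the engine’s actual theory-stack representation:

```python
# Hypothetical sketch: theory layers as lists of labelled axis-aligned regions;
# a scenario hit by both "permitted" and "forbidden" regions is a conflict.

def in_box(point, box):
    """Axis-aligned region: one (low, high) bound per dimension."""
    return all(lo <= x <= hi for x, (lo, hi) in zip(point, box))

def classify(scenario, layers):
    verdicts = set()
    for layer in layers:
        for label, box in layer:
            if in_box(scenario, box):
                verdicts.add(label)
    if {"permitted", "forbidden"} <= verdicts:
        return "conflict"      # hidden contradiction: escalate
    return verdicts.pop() if verdicts else "unregulated"

# Two toy layers over assumed (transaction size, risk score) dimensions.
base_policy = [("permitted", [(0.0, 0.5), (0.0, 0.5)])]
new_rule    = [("forbidden", [(0.4, 1.0), (0.3, 1.0)])]

print(classify((0.2, 0.2), [base_policy, new_rule]))   # "permitted"
print(classify((0.45, 0.4), [base_policy, new_rule]))  # "conflict"
```

Swapping one layer for another is how the same scenario can be compared under different legal regimes or value systems without re-encoding the facts.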
Another direction is narrative and creative work. A language model can generate a rich story, but it may forget earlier details about characters, locations, or world rules. By encoding characters, places, and narrative constraints as concepts and theories, AGISystem2 can act as a consistency oracle for long-form works: checking that a character behaves in ways compatible with earlier descriptions, that magic or technology follows the world’s rules, or that structural expectations for a genre are met. The engine does not replace creativity, but it gives authors, tools, or agents a way to ask “Is this scene still coherent with everything we have established so far?”
In the longer term, the same machinery can encode scientific domains in a formal yet interpretable way. Ontological dimensions capture scientific quantities and structures, axiological dimensions capture methodological or ethical preferences, and theory layers capture particular models or schools of thought. Scientific articles, experimental setups, and results can be turned into points and regions in this space. The ambition is that AGISystem2 can act as a consistency checker for scientific claims: highlighting where a new paper fits existing theories, where it contradicts them, and which assumptions must change for the claims to hold. In that sense it moves a step toward “turning science into code” by giving scientific narratives a precise geometric counterpart.
If you want to understand the theory behind the engine, start with the Conceptual Spaces chapter. It introduces the idea that concepts are regions, facts are points, and reasoning is movement within a high-dimensional space, and it explains the split between ontological and axiological dimensions. The companion chapter on Reasoning shows how deduction, induction, abduction, analogy, counterfactuals, temporal reasoning, and deontic reasoning all become geometric operations. For a deeper look at fairness and explanation, the Bias & Values and Explainability chapters describe how masks, bands, and theory layers support audits and proofs.
If you are more interested in how the engine is built, the Architecture section provides a layered overview of the main modules and their responsibilities, from low-level vector operations up to theory stacking and validation. The reference pages on Empirical Space, Ontology Dimensions, Axiology Dimensions, and Ingestion explain how the global vector space is structured and how statements are encoded into points and regions. The Configuration page explains profiles (auto_test, manual_test, prod) and the trade-offs they make between precision, recall, and resource usage.
If you are integrating AGISystem2 into an agent or application, the main practical entry points are the APIs page, which describes the high-level methods on EngineAPI and on agentic sessions; the Syntax page, which gives the exact shapes of facts, questions, theories, and macros; and the CLI chapter, which introduces the small command-line tool you can run in any working directory and which keeps its local state in a .AGISystem2 folder. When you encounter unfamiliar terms or acronyms, the Quick Wiki provides short definitions and links back into the relevant theory chapters.
Beyond these core topics, there are chapters that describe learning dynamics, limits, and future directions. The Learning chapter explains how concepts change over time through clustering, splitting, and merging of bounded diamonds, and how temporal memory interacts with long-term storage. The Limits chapter is explicit about what the engine does not try to do and about the ways geometric representations can fail. The Roadmap chapter sketches research directions such as richer temporal logics, deeper integration with external knowledge bases, and tighter coupling between AGISystem2 and generative “System 1” models.
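The merging step mentioned above can be sketched in a few lines. This is only an over-approximating toy, under the assumption that a bounded diamond is an L1 ball given by a centre and radius; the Learning chapter describes the engine’s actual merge policy:

```python
# Hypothetical sketch: merge two bounded diamonds (L1 balls) into one
# enclosing diamond, as might happen when two clusters collapse into
# a single concept during learning.

def merge_diamonds(d1, d2):
    (c1, r1), (c2, r2) = d1, d2
    centre = tuple((a + b) / 2 for a, b in zip(c1, c2))
    gap = sum(abs(a - b) for a, b in zip(c1, c2))
    # Enclosing radius: half the centre-to-centre gap plus the larger
    # original radius, so every point of both diamonds stays covered.
    radius = gap / 2 + max(r1, r2)
    return centre, radius

small_dog = ((0.2, 0.2), 0.1)
large_dog = ((0.6, 0.2), 0.1)
dog = merge_diamonds(small_dog, large_dog)
print(dog)  # one concept covering both clusters
```

Splitting runs the same logic in reverse: when the points inside a diamond form well-separated clusters, the region is replaced by tighter diamonds around each cluster.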
If you are new to the project, reading Conceptual Spaces, Reasoning, and Architecture will give you a solid mental model. If you are building on the engine, the APIs, Syntax, and CLI pages are your main references. If you are auditing or working in sensitive domains, the Bias & Values, Explainability, and Limits chapters are the most relevant starting points. Throughout the documentation, Quick Wiki remains the compact index of concepts that ties these threads together.