Executable Language Systems as a First Neighbor
Executable natural language is one of the clearest neighboring areas because it makes visible the shift from fixed semantic compilation toward governed runtime interpretation. ACE and related controlled languages represent the formal-semantics-first tradition [FUCHS-SCHWITTER-1996] [FUCHS-ET-AL-2008]. LMQL, CoRE, ReAct, PAL, DSPy, AutoGen, and guided generation systems show different ways in which language-model runtimes can turn text into controlled behavior [BEURER-KELLNER-2022] [XU-2024] [YAO-2023] [GAO-2022] [DSPY-2023] [WU-2023] [WILLARD-LOUF-2023].
Their relevance for Meta-Rational Pragmatics is direct. They show that the central problem is no longer merely how to represent meaning, but how to govern the transition from textual flexibility to bounded execution. In other words, they occupy the frontier where executable pragmatics becomes unavoidable.
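The guided-generation idea mentioned above can be made concrete with a toy sketch. None of this is the API of any cited system; it is a minimal illustration, under invented names, of the core move: at each decoding step, a structural constraint masks the vocabulary before the model's preferences are consulted, so the output is grammatical by construction.

```python
# Toy sketch of guided generation: constrain each decoding step so the
# output stays inside a tiny "grammar" (here: a number, then a unit).
# The model is faked as a fixed preference order over tokens; real systems
# derive allowed-token masks from automata or grammars.

VOCAB = ["12", "cat", "kg", "hello", "7", "m"]

def allowed(step):
    """Return the tokens permitted at this step of the mini-grammar."""
    if step == 0:                      # first token must be a number
        return {t for t in VOCAB if t.isdigit()}
    return {"kg", "m"}                 # second token must be a unit

def fake_model_scores(tokens):
    # Stand-in for LLM logits: prefer earlier vocabulary entries.
    return {t: -VOCAB.index(t) for t in tokens}

def guided_decode(n_steps=2):
    out = []
    for step in range(n_steps):
        mask = allowed(step)                      # structural constraint
        scores = fake_model_scores(mask)          # model preference
        out.append(max(scores, key=scores.get))   # greedy pick inside mask
    return " ".join(out)

print(guided_decode())  # → "12 kg"
```

The design point carried by this sketch is the one the section argues: the constraint (the mask) and the model (the scores) are separate components, so the transition from textual flexibility to bounded execution is an explicit, governable step.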
Engineered Intuition, RAG, and Thinking Databases
Large language models are the default practical analogue of fast intuition for current AI agents because they can synthesize heterogeneous cues immediately. But fluent output is not equivalent to epistemically controlled reasoning, and the literature on hallucination and framing bias makes that limitation hard to ignore [BENDER-2021]. Retrieval-augmented generation helps, but standard RAG remains a retrieve-then-compress pattern rather than a full account of governed intuition [LEWIS-2020] [GAO-2023].
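The retrieve-then-compress pattern can be sketched in a few lines. This is not a real RAG pipeline: token overlap stands in for a dense retriever, and plain truncation stands in for compression; all names are illustrative.

```python
# Minimal retrieve-then-compress sketch of classical RAG: rank documents
# by overlap with the query, then squeeze the winners into a fixed budget.

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    """Rank documents by token overlap with the query; return top-k."""
    scored = sorted(corpus, key=lambda d: len(tokenize(d) & tokenize(query)),
                    reverse=True)
    return scored[:k]

def compress(docs, budget=12):
    """Naive 'compression': concatenate and truncate to a word budget."""
    words = " ".join(docs).split()
    return " ".join(words[:budget])

CORPUS = [
    "ACE is a controlled natural language with formal semantics",
    "RAG retrieves documents and conditions generation on them",
    "Tensor networks factor high dimensional arrays",
]

context = compress(retrieve("what does RAG do with documents", CORPUS))
print(context)  # a bounded context the generator would be conditioned on
```

Note what the sketch does not do: it reports nothing about coverage or residual uncertainty, which is exactly the gap the following paragraphs press on.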
Once an explicit IR exists, the knowledge base can support multiple specialized intuition engines rather than one generic retrieval pipeline. Some may be retrieval-like, some abstraction-oriented, some simulation-like, some case-based, some structurally constrained. HDC/VSA-style associative computation, abstract interpretation, symbolic execution, heuristic search, and analogical recall all become candidates for fast first-pass services as long as their outputs disclose coverage, epistemic status, and residual uncertainty [HDC-SURVEY-I] [HDC-SURVEY-II] [KANERVA-2009] [COUSOT-1977] [KING-1976].
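The HDC/VSA-style associative computation mentioned above is easy to demonstrate in the binary/XOR flavor described by Kanerva: binding is elementwise XOR (its own inverse), and similarity is the fraction of agreeing bits. The dimensionality and variable names below are illustrative only.

```python
import random

# Toy binary VSA/HDC sketch: bind a role to a filler with XOR, then show
# that unbinding recovers the filler exactly while the bound pair itself
# is nearly uncorrelated with the filler (similarity ~0.5 at random).

D = 4096
random.seed(0)

def hv():
    """Random dense binary hypervector."""
    return [random.randint(0, 1) for _ in range(D)]

def bind(a, b):
    return [x ^ y for x, y in zip(a, b)]

def sim(a, b):
    """Fraction of agreeing bits; ~0.5 for unrelated vectors, 1.0 for equal."""
    return sum(x == y for x, y in zip(a, b)) / D

role, filler = hv(), hv()
pair = bind(role, filler)

print(sim(bind(pair, role), filler))  # → 1.0: unbinding recovers the filler
print(sim(pair, filler))              # ≈ 0.5: the bound pair hides the filler
```

Because similarity is graded rather than exact, such encodings naturally disclose a confidence signal, which is what makes them candidates for fast first-pass services.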
This is the point of a thinking database. It is not only searched; it is queried through executable regimes that produce ranked contexts, conservative envelopes, candidate decompositions, and boundary cases. Classical RAG then becomes one member of a broader family of fast interpretive services over an explicit knowledge substrate.
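As a sketch of what "queried through executable regimes" could mean, consider an interface in which every regime returns its answer together with self-reported coverage and uncertainty, so a runtime can rank or escalate. The regime names, fields, and ranking rule below are hypothetical.

```python
from dataclasses import dataclass

# Sketch of a "thinking database" query layer: each regime is a fast
# interpretive service whose result discloses coverage and uncertainty.

@dataclass
class RegimeResult:
    answer: str
    coverage: float     # fraction of the query the regime claims to handle
    uncertainty: float  # residual uncertainty the regime itself reports

def retrieval_regime(query):
    return RegimeResult(f"ranked contexts for {query}", coverage=0.9, uncertainty=0.3)

def envelope_regime(query):
    # abstract-interpretation style: sound but coarse conservative bounds
    return RegimeResult(f"conservative envelope for {query}", coverage=1.0, uncertainty=0.6)

def query(regimes, q):
    """Run all regimes and rank by disclosed coverage minus uncertainty."""
    results = [r(q) for r in regimes]
    return sorted(results, key=lambda r: r.coverage - r.uncertainty, reverse=True)

ranked = query([retrieval_regime, envelope_regime], "tensor contraction cost")
print(ranked[0].answer)
```

Under this framing, classical RAG is just one `retrieval_regime` among several, which is the claim the paragraph above makes in prose.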
Representation Families Worth Tracking
Several other research families matter because they offer different intermediate structures for future regime selection. Conceptual spaces provide geometric representations of concepts and relations, useful as a graded layer between raw language and rigid symbols [BECHBERGER-2017] [BECHBERGER-2018]. HDC and vector symbolic architectures offer algebraic, compositional, and scalable structured encodings that may support fast theory signatures or analogical recall [HDC-SURVEY-I] [HDC-SURVEY-II] [TORCHHD-2023].
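The "graded layer" role of conceptual spaces can be illustrated with a minimal sketch: concepts as prototype points in a small quality-dimension space, and categorization as nearest-prototype assignment. The dimensions and coordinates below are invented for illustration.

```python
import math

# Toy conceptual-space sketch: concepts live in a 2-D quality space
# (redness, size) and a point is categorized by its nearest prototype,
# giving a graded, geometric layer between raw language and rigid symbols.

PROTOTYPES = {
    "cherry": (0.95, 0.1),   # illustrative (redness, size) coordinates
    "apple":  (0.80, 0.4),
    "melon":  (0.30, 0.9),
}

def categorize(point):
    """Assign a point to the concept with the nearest prototype."""
    return min(PROTOTYPES, key=lambda c: math.dist(point, PROTOTYPES[c]))

print(categorize((0.85, 0.35)))  # → apple
```

The same geometry supports graded membership (distance to a prototype), which is precisely what a rigid symbolic layer cannot express directly.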
Probabilistic graphical models and probabilistic programming remain relevant wherever uncertainty, partial dependence, and explicit belief updates matter [PGMPY] [PYRO]. Causal modeling matters wherever explanation and intervention must be kept distinct from correlation [PEARL-2009]. Knowledge graphs, tensor methods, and sheaf-like approaches matter because local/global structure is often itself the reason a problem should be routed to one regime rather than another [KG-SURVEY] [PYKEEN] [TENSOR-NET-SURVEY] [TENSORNETWORK-LIB] [TENPY] [SHEAF-SURVEY-2025] [PYSHEAF].
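The "explicit belief updates" that make probabilistic models relevant here reduce, in the simplest case, to Bayes' rule over named hypotheses. The two-hypothesis model and numbers below are illustrative, not drawn from any cited library.

```python
# Minimal explicit belief update, the kind of computation probabilistic
# graphical models make routine: P(H | evidence) via Bayes' rule.

prior = {"fault": 0.1, "ok": 0.9}
likelihood = {"fault": 0.8, "ok": 0.2}   # P(alarm | hypothesis)

def posterior(prior, likelihood):
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

post = posterior(prior, likelihood)
print(round(post["fault"], 3))  # 0.08 / (0.08 + 0.18) ≈ 0.308
```

The point for regime selection is that every number in the update is inspectable: priors, likelihoods, and the normalization are explicit objects a runtime can audit, unlike the implicit weighting inside a fluent generation.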
The point is not that one of these families will dominate everything. The point is that structural regime selection may require several intermediate representational media before exact symbolic formalization is either possible or desirable.
What This Means for Meta-Rational and Executable Pragmatics
Taken together, these adjacent directions clarify the horizon of executable pragmatics. Meta-Rational Pragmatics needs more than one solver and more than one model class. It needs a runtime capable of coordinating multiple representational and computational families under explicit policies of interpretation, escalation, and validation.
That is why related research matters here. It shows that the design space is already populated with viable partial answers: controlled natural languages, agent runtimes, guided generation, geometric concept layers, associative high-dimensional encodings, probabilistic and causal models, graph systems, and efficient execution substrates. The challenge is no longer to find one universal medium. It is to make plurality tractable and governable inside a disciplined runtime.
References
- [AGISYSTEM2] AGISystem2. (2026). Research Index: CPU-centric ML, VSA/HDC, Conceptual Spaces, and Formal Reasoning.
- [BECHBERGER-2017] Bechberger, L., & Kühnberger, K.-U. (2017). A Thorough Formalization of Conceptual Spaces.
- [BECHBERGER-2018] Bechberger, L., & Kühnberger, K.-U. (2018). Formal Ways for Measuring Relations between Concepts in Conceptual Spaces.
- [BENDER-2021] Bender, E. M., et al. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
- [BEURER-KELLNER-2022] Beurer-Kellner, L., et al. (2022). Prompting Is Programming: A Query Language for Large Language Models.
- [COUSOT-1977] Cousot, P., & Cousot, R. (1977). Abstract Interpretation.
- [DSPY-2023] Khattab, O., et al. (2023). DSPy.
- [FUCHS-SCHWITTER-1996] Fuchs, N. E., & Schwitter, R. (1996). Attempto Controlled English.
- [FUCHS-ET-AL-2008] Fuchs, N. E., Kaljurand, K., & Kuhn, T. (2008). Attempto Controlled English for Knowledge Representation.
- [GAO-2022] Gao, L., et al. (2022). PAL: Program-aided Language Models.
- [GAO-2023] Gao, Y., et al. (2023). Retrieval-Augmented Generation for Large Language Models: A Survey.
- [HDC-SURVEY-I] Kleyko, D., et al. (2021). A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part I.
- [HDC-SURVEY-II] Kleyko, D., et al. (2021). A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part II.
- [KANERVA-2009] Kanerva, P. (2009). Hyperdimensional Computing.
- [KG-SURVEY] Ji, S., et al. (2020). A Survey on Knowledge Graphs.
- [KING-1976] King, J. C. (1976). Symbolic Execution and Program Testing.
- [LEWIS-2020] Lewis, P., et al. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks.
- [PEARL-2009] Pearl, J. (2009). Causality.
- [PGMPY] pgmpy project. (2025). Probabilistic Graphical Models in Python.
- [PYKEEN] PyKEEN Documentation. (2025). Knowledge Graph Embeddings Pipeline.
- [PYRO] Pyro Documentation. (2025). Deep Probabilistic Programming with PyTorch.
- [PYSHEAF] PySheaf Repository. (2025). Sheaf Theoretical Data Analysis.
- [SHEAF-SURVEY-2025] Ayzenberg, A., et al. (2025). Sheaf Theory: From Deep Geometry to Deep Learning.
- [TENPY] TeNPy Project. (2025). TeNPy Documentation and Repository.
- [TENSOR-NET-SURVEY] Wang, M., et al. (2023). Tensor Networks Meet Neural Networks: A Survey.
- [TENSORNETWORK-LIB] Google. (2025). TensorNetwork: A Library for Tensor Network Manipulations.
- [TORCHHD-2023] Heddes, M., et al. (2023). Torchhd.
- [WILLARD-LOUF-2023] Willard, B., & Louf, R. (2023). Efficient Guided Generation for Large Language Models.
- [WU-2023] Wu, Q., et al. (2023). AutoGen.
- [XU-2024] Xu, S., et al. (2024). AIOS Compiler.
- [YAO-2023] Yao, S., et al. (2023). ReAct.