From Judgment Regimes to Regime Induction
Meta-rational pragmatics starts from the observation that, in many practically relevant artefacts, operational meaning is not exhausted by abstract denotation. A document, policy, specification, workflow, or knowledge source becomes actionable only relative to a judgment regime: what must be checked, what evidence is admissible, which uncertainties are acceptable, and what counts as acceptance, rejection, or deferred judgment in context [WITTGENSTEIN-1953] [BILETZKI-2023]. From this perspective, semantics is often better understood as a stabilized special case of constrained pragmatics rather than as its universal foundation. The architectural consequence is that an intelligent system should not assume that every task can be handled by one uniform reasoning style, one representational substrate, or one monolithic model. It should first determine the structural character of the local subproblem and only then decide how that subproblem is to be made computable.
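To make the notion of a judgment regime concrete, it can help to imagine it as an explicit data object rather than an implicit convention. The sketch below is a minimal illustration, not part of the source argument: the field names (`required_checks`, `admissible_evidence`, `uncertainty_budget`) are hypothetical choices, and the three-way verdict simply mirrors the text's acceptance, rejection, and deferred judgment.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Verdict(Enum):
    ACCEPT = "accept"
    REJECT = "reject"
    DEFER = "defer"

@dataclass
class JudgmentRegime:
    """An explicit judgment regime: what to check, what evidence counts,
    and how much residual uncertainty is acceptable in context."""
    required_checks: list[Callable[[dict], bool]]
    admissible_evidence: set[str]
    uncertainty_budget: float  # tolerated fraction of failed checks

    def judge(self, artefact: dict) -> Verdict:
        evidence = set(artefact.get("evidence", []))
        if not evidence <= self.admissible_evidence:
            # Inadmissible evidence does not refute: judgment is deferred.
            return Verdict.DEFER
        results = [check(artefact) for check in self.required_checks]
        failure_rate = 1 - sum(results) / max(len(results), 1)
        if failure_rate == 0:
            return Verdict.ACCEPT
        return Verdict.REJECT if failure_rate > self.uncertainty_budget else Verdict.DEFER
```

The point of the sketch is only that "what counts as acceptance" becomes an inspectable object that can vary per artefact and per context, rather than a property of the artefact's abstract content.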
This is the intended meaning of Structural Regime Selection and Tractable Computation. In its stronger form, however, the concept should be read not merely as runtime method selection, but as regime induction. The system must learn how to recognize what kind of subproblem it is facing, how to stabilize that subproblem into a bounded working object, and how to reduce it to an interpreter whose behavior is locally adequate, computationally feasible, and sufficiently auditable. Under this formulation, the target of learning changes. The system is not primarily trying to internalize one universal semantics of the world or compress all competence into one predictive engine. It is trying to acquire two higher-order capacities: the ability to detect structural regimes and the ability to map them to tractable interpreters.
Structural Regimes and the Meaning of Tractable Computation
A structural regime is not just a topic label. It is a hypothesis about the computational shape of the subproblem. Some subproblems are constraint-dominated and justify exact symbolic verification. Others are graph-structured and call for relational decomposition. Others are driven by graded similarity and are better handled geometrically or associatively. Others involve explicit uncertainty, local dependence, or latent state evolution and are better expressed probabilistically or through predictive models. Others are synthesis problems, where the relevant output is an executable procedure rather than a descriptive answer. This is why regime induction is more fundamental than simple tool choice. Before one can call a solver, a retrieval engine, a world model, or a compiler-like transform, one must first isolate the relevant entities, scopes, dependencies, invariants, and success criteria. Intelligence begins when the system succeeds in producing a bounded active theory frame, that is, an explicit and delimited working formulation of the subproblem, within which solving has become meaningful.
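The family of regimes named above can be caricatured in code. The following is a deliberately toy sketch, assuming invented feature names and thresholds; a real system would use a learned classifier over a richer intermediate representation rather than hand-written rules.

```python
from enum import Enum, auto

class Regime(Enum):
    CONSTRAINT = auto()     # constraint-dominated: exact symbolic verification
    RELATIONAL = auto()     # graph-structured: relational decomposition
    SIMILARITY = auto()     # graded similarity: geometric / associative methods
    PROBABILISTIC = auto()  # explicit uncertainty or latent state evolution
    SYNTHESIS = auto()      # output is an executable procedure

def suggest_regime(features: dict) -> Regime:
    """Toy rule-based regime suggester over hypothetical structural features."""
    if features.get("hard_constraints", 0) > features.get("soft_preferences", 0):
        return Regime.CONSTRAINT
    if features.get("entities", 0) > 1 and features.get("pairwise_relations", 0) > 0:
        return Regime.RELATIONAL
    if features.get("needs_procedure", False):
        return Regime.SYNTHESIS
    if features.get("observed_uncertainty", 0.0) > 0.1:
        return Regime.PROBABILISTIC
    return Regime.SIMILARITY
```

Even in this caricature, the output is a hypothesis about computational shape, not a topic label: it names the kind of interpreter that should be tried next.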
Tractable computation, in this setting, does not mean only lower runtime. It means that the problem has been transformed into a form for which a reliable local interpreter exists at acceptable cost. Sometimes this happens through exact formalization, as when a fragment becomes explicit enough for satisfiability checking or symbolic verification. Sometimes it happens through approximation, pruning, or narrowing before exact reasoning is attempted. Sometimes it happens through change of representation, where an apparently hard problem becomes manageable because its structure has been recast in graph form, factorized into local dependencies, or encoded compositionally. This is why classical ideas such as SMT solving, abstract interpretation, symbolic execution, bounded graph and hypergraph structure, and tractable circuit structure remain relevant. Their value is not nostalgic. Their value is that they instantiate different ways in which local tractability can be obtained once the right regime has been identified [DEMOURA-2008] [COUSOT-1977] [KING-1976] [CADAR-2013] [GOTTLOB-2001] [BRAULTBARON-2016].
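The "change of representation" route to tractability can be illustrated with the smallest possible fragment of abstract interpretation in the sense of [COUSOT-1977]: instead of enumerating concrete inputs, one computes with interval bounds, trading precision for a cheap and sound analysis. This is a sketch only; real abstract domains include widening, joins, and many more operations.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other: "Interval") -> "Interval":
        # The product of two intervals is bounded by its corner products.
        corners = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
        return Interval(min(corners), max(corners))

def provably_nonzero(v: Interval) -> bool:
    # Sound but incomplete: True guarantees no concrete value in v is zero.
    return not (v.lo <= 0 <= v.hi)

# With x in [1, 10], abstract evaluation bounds x*x + 2 without enumeration.
x = Interval(1, 10)
y = x * x + Interval(2, 2)
```

The interesting property is that an intractable question over all concrete executions ("can this divisor ever be zero?") has been recast into a constant-cost check over the abstract representation.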
Skills, Compilation, and the Agentic Virtual Machine
The notion of agentic skills makes this architecture implementable in a realistic way. A skill should not be understood merely as a wrapper around a tool call. It is better defined as a bounded computational unit with explicit inputs, explicit outputs, declared assumptions, and limited side effects. In heavier execution modes, skills can behave like dependency-aware build steps. In more dynamic modes, the same skills can be assembled into short-lived plans and revised locally without forcing a global restart of the reasoning history. This design matters because regime induction needs stable local units on which structural decisions can act. Without explicit skills, the system remains a loose conversational flow. With explicit skills, it becomes possible to isolate subproblems, track dependencies, cache intermediate results, validate local behavior, and optimize recurring execution paths. This is also consistent with recent engineering guidance from major labs, which emphasizes simple composable agent patterns, explicit tools, reusable skills, and context engineering rather than unconstrained conversational autonomy [ANT-BEA-2024] [ANT-SKILLS-2025] [ANT-CTX-2025] [OAI-SKILLS-2026].
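The contract view of a skill can be sketched directly. The class below is a minimal illustration under assumed field names; actual skill frameworks add schemas, retries, and provenance, but the essential move is the same: inputs, outputs, and assumptions are declared, and violations fail loudly instead of leaking into the reasoning history.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    """A bounded computational unit: explicit I/O contract, declared
    assumptions, and side effects limited to its declared outputs."""
    name: str
    inputs: set[str]
    outputs: set[str]
    assumptions: list[str]
    run: Callable[[dict], dict]

    def __call__(self, env: dict) -> dict:
        missing = self.inputs - env.keys()
        if missing:
            raise ValueError(f"{self.name}: missing inputs {missing}")
        # The skill sees only its declared inputs: no hidden context.
        result = self.run({k: env[k] for k in self.inputs})
        if set(result) != self.outputs:
            raise ValueError(f"{self.name}: contract violation, produced {set(result)}")
        return {**env, **result}
```

Because each skill is a pure function of its declared inputs, chains of skills can be cached, validated locally, and revised without replaying the whole history, which is exactly the stability that regime induction needs to act on.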
Once skills are treated this way, a well-structured agentic system becomes more than a prompt plus tools. It becomes a high-level source language. Its skills, prompts, dependency graph, routing logic, validation hooks, and fallback policies together define an executable program that can in principle be analyzed, optimized, and partially compiled. This gives regime induction a concrete computational interpretation. Over repeated executions, the system can discover that some skill chains correspond to stable structural regimes, that some prompt-mediated steps can be replaced by deterministic code, that some routing decisions can be delegated to smaller trained classifiers, and that some recurrent subproblems deserve specialized interpreters. In that sense, compilation is not merely low-level optimization. It is the progressive conversion of expensive, prompt-heavy, weakly structured agent behavior into stronger and more reliable local competence. The long-term target is therefore not a single giant reasoning engine, but an agentic virtual machine that preserves the common execution grammar of the original skill system while replacing fragile runtime choices with more efficient and more auditable interpreters.
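One narrow slice of this compilation process can be sketched: promoting a prompt-mediated step to a cached deterministic transform once its observed behavior has stabilized. The class and threshold below are hypothetical; a serious implementation would key on input equivalence classes and attach tests, not just repeat counts.

```python
from collections import defaultdict

class CompilingRouter:
    """Sketch of progressive compilation: if a (skill, input) pair has
    produced the same output `threshold` times, promote it to a cached
    deterministic result and stop invoking the expensive interpreter."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.observations = defaultdict(list)  # (skill, input) -> outputs seen
        self.compiled = {}                     # (skill, input) -> stable output

    def execute(self, skill_name, fn, arg):
        key = (skill_name, arg)
        if key in self.compiled:
            return self.compiled[key], "compiled"
        out = fn(arg)  # expensive path, e.g. a prompt-mediated call
        seen = self.observations[key]
        seen.append(out)
        if len(seen) >= self.threshold and len(set(seen)) == 1:
            self.compiled[key] = out  # behavior has stabilized: freeze it
        return out, "interpreted"
```

The point is not the caching trick itself but the direction of travel it illustrates: expensive, weakly structured behavior is converted, on evidence, into cheaper and more auditable local competence.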
The Staged Ecology of LLMs, ML, Software, and Local Substrates
This perspective also clarifies the respective roles of LLMs, classical ML, and classical software. LLMs remain useful, but mainly as high-level orchestrators for decomposition, schema induction, regime suggestion, interface translation, and partial synthesis. They are especially valuable when the artefact is messy and the first task is to move from informal structure to an explicit intermediate representation. At the same time, they remain weak as universal interpreters for exact reasoning, long-horizon consistency, and tractability-aware decomposition, which is one reason why current practical agent design emphasizes tools, context engineering, and structured interfaces [ANT-BEA-2024] [ANT-CTX-2025]. Classical ML has a more targeted role: learned routers, detectors, boundary estimators, regime classifiers, confidence models, and specialized predictors for recurring local tasks. Classical software remains central whenever a subproblem has already been sufficiently stabilized to justify exact or near-exact procedures, such as solvers, planners, validators, graph algorithms, or domain-specific code. The emerging architecture is therefore not “LLMs versus classical methods.” It is a staged ecology in which LLMs help expose structure, ML helps detect and specialize regimes, and classical software executes where explicit structure justifies stronger guarantees.
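The staged ecology can be shown end to end with deliberately trivial stand-ins: a toy "structure exposure" pass in place of an LLM, a one-line rule in place of a learned router, and exact classical execution at the end. Every component here is a hypothetical stub; only the three-stage shape is the point.

```python
def expose_structure(text: str) -> dict:
    """Stand-in for an LLM pass: informal artefact -> explicit intermediate
    representation (here, just numbers plus a guessed task)."""
    numbers = [int(tok) for tok in text.split() if tok.lstrip("-").isdigit()]
    return {"numbers": numbers, "task": "sum" if "total" in text else "max"}

def classify_regime(rep: dict) -> str:
    """Stand-in for a learned regime classifier over intermediate reps."""
    return "arithmetic" if rep["numbers"] else "unknown"

# Classical software executes where explicit structure justifies it.
CLASSICAL = {"sum": sum, "max": max}

def run(text: str):
    rep = expose_structure(text)
    if classify_regime(rep) == "arithmetic":
        return CLASSICAL[rep["task"]](rep["numbers"])
    raise NotImplementedError("no interpreter available for this regime")
```

Nothing in the pipeline asks any single component to be a universal interpreter: the LLM stand-in only exposes structure, the classifier only names a regime, and exact code only runs once the structure is explicit.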
Within this architecture, world models, neuro-symbolic systems, probabilistic circuits, and vector-symbolic representations should be treated as candidate local substrates, not universal foundations. World models are appropriate when the subproblem genuinely involves latent state evolution, internal rollout, or action-conditioned consequence [HA-2018] [HAFNER-2023]. Neuro-symbolic methods are appropriate when explicit compositional structure, local proof traces, or symbolic constraints materially improve performance, traceability, or governance [GARCEZ-LAMB-2020]. Probabilistic circuits are relevant when exact and tractable probabilistic inference is achievable under known structural restrictions. Vector-symbolic or hyperdimensional representations are important because they offer a possible middle layer between dense neural approximation and rigid symbolic data structures: they support role-filler binding, compositional manipulation, and compact structured representations with relatively simple algebraic operations [HDC-SURVEY-I]. Their significance here is not that they replace deep learning, but that they may offer better local substrates for regimes where compositional structure must remain manipulable and computationally light.
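The role-filler binding claim about vector-symbolic representations is concrete enough to demonstrate. The sketch below uses the classic bipolar construction (binding by elementwise multiplication, superposition by majority sign, cleanup by similarity against a codebook), with an arbitrary dimensionality and seed chosen for illustration.

```python
import random

DIM = 2048
random.seed(0)

def rand_vec():
    return [random.choice((-1, 1)) for _ in range(DIM)]

def bind(a, b):
    # Role-filler binding: elementwise multiply; self-inverse for bipolar vectors.
    return [x * y for x, y in zip(a, b)]

def bundle(*vs):
    # Superposition: elementwise majority sign (ties broken toward +1).
    return [1 if sum(col) >= 0 else -1 for col in zip(*vs)]

def sim(a, b):
    # Normalized dot product: near 1 for related, near 0 for random vectors.
    return sum(x * y for x, y in zip(a, b)) / DIM

# Encode the record {color: red, shape: square} as a single vector.
color, shape, red, square = (rand_vec() for _ in range(4))
record = bundle(bind(color, red), bind(shape, square))

# Query the 'color' role: unbind, then clean up against a small codebook.
noisy = bind(record, color)
codebook = {"red": red, "square": square}
best = max(codebook, key=lambda k: sim(noisy, codebook[k]))
```

The operations are simple algebra over flat vectors, which is exactly the claim in the text: compositional structure stays manipulable while remaining computationally light.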
Implementation Path, Risks, and the Central Claim
A realistic implementation path is therefore staged. A first generation can use current LLM-based agents mainly for decomposition, schema induction, and provisional regime suggestion, while delegating execution to symbolic, probabilistic, graph-based, simulation-based, or classical modules. A second generation can begin compiling stable skill chains into cached transforms, validators, graph routines, smaller routers, or domain-specific code. A third generation can allow the system to synthesize or train new local interpreters for recurring structural motifs. At that point, regime induction starts to converge with partial program synthesis and with learned specialization of execution paths. The important point is that this remains a pragmatic and controllable agenda. It does not require an immediate replacement of LLMs by one universal world model, nor does it assume that one representational family will dominate all regimes.
The main risks should also be stated explicitly. The first is premature compilation: transient workflow regularities may be frozen too early into rigid procedures. The second is level confusion: useful skill boundaries are not automatically the right semantic boundaries. The third is hidden fragility: some skills may appear explicit while still depending on ambiguous prompts or undocumented context conventions. The fourth is local optimization at the expense of global reliability or auditability. For this reason, compilation should be incremental, evidence-driven, versioned, reversible, and tied to tests, provenance, and rollback paths. The objective is not unconstrained self-modification. The objective is disciplined specialization under explicit control.
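The discipline demanded here, incremental, evidence-driven, versioned, reversible, can be sketched as a registry in which a compiled variant replaces the interpreted one only after passing the skill's regression tests, and every promotion is versioned so it can be rolled back. The class, its method names, and the test format are all invented for illustration.

```python
class SkillRegistry:
    """Disciplined specialization: versioned implementations, test-gated
    promotion, and an explicit rollback path."""

    def __init__(self):
        self.versions = {}  # name -> list of (impl, provenance)
        self.active = {}    # name -> index of the active version

    def register(self, name, impl, provenance):
        self.versions.setdefault(name, []).append((impl, provenance))
        self.active[name] = len(self.versions[name]) - 1

    def promote(self, name, compiled_impl, tests, provenance):
        # Evidence-driven: a candidate that fails any test is never activated.
        if not all(compiled_impl(x) == expected for x, expected in tests):
            return False
        self.register(name, compiled_impl, provenance)
        return True

    def rollback(self, name):
        if self.active[name] > 0:
            self.active[name] -= 1  # previous versions are kept, never erased

    def call(self, name, x):
        impl, _ = self.versions[name][self.active[name]]
        return impl(x)
```

Guards of this kind address premature compilation and hidden fragility directly: a frozen procedure is only ever one validated, reversible step away from the behavior it replaced.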
The central claim can therefore be stated in a concise form. A mature meta-rational system does not primarily learn one universal semantics of the world. It learns how to induce structural regimes and how to reduce local subproblems to tractable interpreters. Agentic skills provide the practical substrate for this transition because they supply bounded computational units, explicit operational contracts, and a common execution grammar. Structural Regime Selection and Tractable Computation should therefore be understood not as a peripheral heuristic, but as one of the main computational consequences of meta-rational pragmatics.
References
- [ANT-BEA-2024] Anthropic. (2024). Building Effective AI Agents.
- [ANT-CTX-2025] Anthropic. (2025). Effective Context Engineering for AI Agents.
- [ANT-SKILLS-2025] Anthropic. (2025). Equipping Agents for the Real World with Agent Skills.
- [BILETZKI-2023] Biletzki, A., & Matar, A. (2023). Ludwig Wittgenstein.
- [BRAULTBARON-2016] Brault-Baron, J. (2016). Hypergraph Acyclicity Revisited.
- [CADAR-2013] Cadar, C., & Sen, K. (2013). Symbolic Execution for Software Testing: Three Decades Later.
- [COUSOT-1977] Cousot, P., & Cousot, R. (1977). Abstract Interpretation.
- [DEMOURA-2008] de Moura, L., & Bjørner, N. (2008). Z3: An Efficient SMT Solver.
- [GARCEZ-LAMB-2020] d'Avila Garcez, A., & Lamb, L. C. (2020). Neurosymbolic AI: The 3rd Wave.
- [GOTTLOB-2001] Gottlob, G., et al. (2001). The Complexity of Acyclic Conjunctive Queries.
- [HA-2018] Ha, D., & Schmidhuber, J. (2018). World Models.
- [HAFNER-2023] Hafner, D., et al. (2023). Mastering Diverse Domains through World Models.
- [HDC-SURVEY-I] Kleyko, D., et al. (2021). A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part I.
- [KING-1976] King, J. C. (1976). Symbolic Execution and Program Testing.
- [OAI-SKILLS-2026] OpenAI. (2026). Skills in Agents SDK.
- [WITTGENSTEIN-1953] Wittgenstein, L. (1953/2009). Philosophical Investigations.