Research Article · MRP Foundations

Scientific Inquiry as a Succession of Interpretive Regimes

Moving beyond monolithic models of scientific reasoning.

Focus: Epistemic Stratification · MRP Theory

Introduction

Scientific research is often romanticized as a single, fluid process of discovery. However, for the purposes of automation and rigorous assistance, this monolithic view is counterproductive. Drawing on Meta-Rational Pragmatics (MRP), we argue that scientific inquiry is more accurately described as an organized succession of local interpretive regimes. Each regime stabilizes a particular relation among admissible objects, permitted transformations, and acceptable evidence. Understanding this stratification is essential for building research copilots that are more than just fluent text generators.

The Structure of Local Discipline

What counts as "rigor" changes as a researcher moves from exploratory data analysis to formal hypothesis testing. A pattern discovered under exploratory criteria is not yet a causal explanation; a plausible synthesis of papers is not yet a validated conclusion. In each stage, the researcher operates under a different set of local constraints—a "local discipline."

Computational assistance often fails when it ignores these boundaries. Retrieval-augmented systems can improve synthesis [ASAI-2026], but they struggle with verification [SON-ETAL-2025] because the standards for "good synthesis" and "good verification" are epistemically distinct. A regime-sensitive approach acknowledges that different parts of scientific work are governed by different local standards, and that architectural separation is therefore epistemically appropriate.

A Taxonomy of Interpretive Regimes

To make the concept of interpretive regimes operational, we propose a taxonomy of bounded modes of inquiry, each with its own function and validation criteria:

  • Observational: Converts phenomena or documents into usable evidence structures.
  • Exploratory: Searches for patterns, anomalies, and promising directions.
  • Model-constructive: Proposes mechanisms and representational structures.
  • Deductive-formal: Derives consequences and checks consistency.
  • Experimental-interventional: Selects actions that discriminate among candidates.
  • Critical-delimitative: Identifies artifacts and limits of validity.

A single research project may move through many of these regimes, often iteratively. The role of the scientist—and potentially the copilot—is to manage the transitions between them, ensuring that the artifacts produced in one regime remain valid when exported to another.
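One way to make these regime boundaries concrete is to tag each research artifact with the regime that produced it and to check admissibility at every transition. The sketch below is illustrative, not a proposal from the article: the regime names follow the taxonomy above, but the specific export rules (e.g. that an exploratory pattern may feed model construction but not deductive work directly) are hypothetical placeholders.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Regime(Enum):
    """The six interpretive regimes from the taxonomy above."""
    OBSERVATIONAL = auto()
    EXPLORATORY = auto()
    MODEL_CONSTRUCTIVE = auto()
    DEDUCTIVE_FORMAL = auto()
    EXPERIMENTAL_INTERVENTIONAL = auto()
    CRITICAL_DELIMITATIVE = auto()

@dataclass(frozen=True)
class Artifact:
    """A research product tagged with the regime that produced it."""
    content: str
    produced_in: Regime

# Hypothetical export rules: which regimes may consume an artifact directly,
# without revalidation. An exploratory pattern, for instance, is admissible
# as input to model construction but not as a premise for formal deduction.
ADMISSIBLE_EXPORTS = {
    Regime.OBSERVATIONAL: {Regime.EXPLORATORY, Regime.MODEL_CONSTRUCTIVE},
    Regime.EXPLORATORY: {Regime.MODEL_CONSTRUCTIVE},
    Regime.MODEL_CONSTRUCTIVE: {Regime.DEDUCTIVE_FORMAL,
                                Regime.EXPERIMENTAL_INTERVENTIONAL},
    Regime.DEDUCTIVE_FORMAL: {Regime.EXPERIMENTAL_INTERVENTIONAL,
                              Regime.CRITICAL_DELIMITATIVE},
    Regime.EXPERIMENTAL_INTERVENTIONAL: {Regime.CRITICAL_DELIMITATIVE},
    Regime.CRITICAL_DELIMITATIVE: {Regime.EXPLORATORY,
                                   Regime.MODEL_CONSTRUCTIVE},
}

def can_export(artifact: Artifact, target: Regime) -> bool:
    """Check whether an artifact may cross a regime boundary as-is."""
    return target in ADMISSIBLE_EXPORTS[artifact.produced_in]
```

Under these assumed rules, `can_export(Artifact("anomaly in run 7", Regime.EXPLORATORY), Regime.DEDUCTIVE_FORMAL)` returns `False`: the pattern must first pass through model construction before it can serve deductive work.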

Operational Stratification

The proposal that scientific reasoning is operationally stratified has direct consequences for AI system design. If we treat science as a sequence of regimes, we can assign different computational mechanisms to different epistemic burdens. Large language models (LLMs) appear highly effective at heuristic exploration and synthesis [GOTTWEIS-ETAL-2025], while formal solvers and causal inference engines are better suited to deductive and interventional work [ZHANG-ETAL-2024].

This stratification allows for a more restrained and implementable direction for scientific AI. Instead of aiming for a general "artificial scientist," we can build systems that are experts at identifying the current regime and mobilizing the appropriate domain-specific background knowledge and validation tools.
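Such a regime-sensitive system could be organized as a simple dispatch table that routes each task to the mechanism matched to its epistemic burden. The handlers below are stand-ins, not real integrations: `llm_explore` would wrap an LLM call and `formal_check` a solver; both names and the failure policy are assumptions for the sketch.

```python
from typing import Callable, Dict

# Hypothetical handlers standing in for real mechanisms: an LLM call for
# exploratory synthesis, a formal solver for deductive checking.
def llm_explore(task: str) -> str:
    return f"[heuristic synthesis of: {task}]"

def formal_check(task: str) -> str:
    return f"[solver-checked consequence of: {task}]"

HANDLERS: Dict[str, Callable[[str], str]] = {
    "exploratory": llm_explore,
    "deductive-formal": formal_check,
}

def dispatch(regime: str, task: str) -> str:
    """Route a task to the mechanism registered for its regime.

    Unregistered regimes fail loudly rather than falling back to the LLM,
    so no mechanism is silently applied outside its regime of validity.
    """
    if regime not in HANDLERS:
        raise ValueError(f"no mechanism registered for regime {regime!r}")
    return HANDLERS[regime](task)
```

The design choice worth noting is the hard failure on unknown regimes: a general-purpose fallback would quietly reintroduce the monolithic "artificial scientist" the stratified view argues against.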

Conclusion

Scientific reliability is not a monolithic property but a distributed one, enforced locally within each interpretive regime. By making this stratification explicit, the MRP framework provides a principled way to organize the various tools and methods of automated science. The future of research copilots lies not in universal intelligence, but in regime-sensitive orchestration.

References

  • [ASAI-2026] Asai, Akari; et al. Synthesizing scientific literature with retrieval-augmented language models. Nature. 2026.
  • [GOTTWEIS-ETAL-2025] Gottweis, Juraj; et al. Towards an AI co-scientist. arXiv. 2025.
  • [SON-ETAL-2025] Son, Guijin; et al. When AI Co-Scientists Fail: SPOT, a Benchmark for Automated Verification. arXiv. 2025.
  • [ZHANG-ETAL-2024] Zhang, Zhaozhi; et al. Adaptive Causal Experimental Design. OpenReview. 2024.