The Ordinative Sciences Framework: A Complete Open-Source Analytical System for Complex Systems
Ordinative Sciences Foundation — March 2026
Category: Systems Theory · Formal Methods · AI Frameworks · Open Source
What Is This
We are releasing, under MIT license, the complete analytical framework suite of the Ordinative Sciences — a set of ten interrelated formal modules designed to analyze any complex system with structural rigor, from individual human figures to civilizational dynamics, from institutional behavior to AI architecture itself.
The release includes the theoretical foundation, the operational modules, and the system instructions for deploying them in AI environments. Everything is published with permanent academic DOIs on Zenodo, version-controlled on GitHub, and free for anyone to use, critique, or extend.
This is not a manifesto. It is an engineering release: ten documents totaling over 6,000 lines of formal definitions, protocols, diagnostic criteria, bias tables, output formats, and operational instructions — ready to be loaded into any large language model with sufficient context capacity.
The Problem These Frameworks Address
Current analytical practice — in intelligence, geopolitics, institutional analysis, organizational diagnostics, and AI alignment research — operates without a shared formal language for structural analysis. The consequences are predictable:
Narrative substitutes for structure. An analysis that tells a compelling story passes for a rigorous one. The coherence is rhetorical, not structural — and the difference is undetectable without a formal criterion to distinguish them.
Domain silos prevent pattern recognition. The same structural dynamics (feedback loops, cascade thresholds, identity degradation, semantic inversion) operate identically across biological, institutional, geopolitical, and informational systems. But because each domain has its own vocabulary and its own analytical tradition, the isomorphisms go unrecognized. An epidemiologist, a political analyst, and an AI safety researcher are often looking at the same structural phenomenon from different angles without a common language to recognize it.
Bias operates invisibly in the analyst. Every analytical framework carries the biases of its creators and its training population. Without a formal protocol for self-diagnosis — one that identifies specific bias patterns with specific triggers and specific countermeasures — the analyst’s conditioning is indistinguishable from the analyst’s conclusions.
AI amplifies all of the above. Large language models trained via RLHF inherit the biases of their evaluator populations and add their own: sycophancy, false equidistance, attenuation of uncomfortable conclusions, fragmentation of systemic patterns (Sharma et al., 2024; Wen et al., 2024; Lamparth et al., 2026). Without structural frameworks that operate independently of the model’s conditioning, AI-assisted analysis reproduces the problem at scale.
The Foundation: Ordinative Set Theory
The mathematical backbone of the entire framework suite is Ordinative Set Theory (OST) (Ghioni, 2026), which redefines set-theoretic foundations for the analysis of real systems.
Where classical set theory treats a set as a passive collection of interchangeable elements, OST defines every real system as an ordered triple:
ℐ = ⟨Σ, R, Φ⟩
Σ (Singularities) — the irreducible, non-interchangeable components of the system. Each Singularity carries a unique function that cannot be performed by any other element. Removal or substitution alters the emergent function of the whole.
R (Relational Field) — the active, oriented structure that organizes Singularities. R is not a passive label ("A is connected to B") but a generative grammar: it defines which relationships between Singularities are possible, which are productive, and toward what forms of emergence they converge. R has a vector — a direction of convergence. Physical laws are the R of the physical domain. The grammar of a language is the R of linguistic expression. Each analytical module in this release is itself an R — the generative grammar of a specific analytical domain.
Φ (Emergent Function) — what the system produces as a totality that is not reducible to any component. A sentence is not the sum of its words. An organism is not the sum of its cells. An institution is not the sum of its members. If Φ = 0, the system is not a system — it is a collection.
OST provides five core axioms (Irreducibility, Meaning Precedes Form, Functional Coherence, Vertical Coherence with bidirectional causation, and Temporal Trajectory), a taxonomy of system pathologies (Mass, Blind Cluster, Fragmentation, Semantic Inertia, Antagonist Order), and formal operations (Coherent Sum, Semantic Derivation, Ordinative Projection, Restructuring, Genesis of Meaning, Memory Integration).
The critical property is Scale Recursion: every ⟨Σ, R, Φ⟩ that produces Φ becomes itself a Singularity in a higher-order set, and every apparently irreducible Singularity is itself ⟨Σ’, R’, Φ’⟩ at a finer level. This recursion is bidirectional and generates the vertical coherence that prevents arbitrary emergence — Φ at any level must be structurally compatible with Φ at adjacent levels.
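The triple and its scale recursion can be sketched as a data structure. This is a minimal illustrative sketch, not part of the formal release: the class name, the nullable-Φ convention, and the linguistic example are assumptions introduced here for clarity.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class OrdinativeSet:
    """A sketch of the OST triple <Sigma, R, Phi>.

    Scale recursion: any OrdinativeSet that produces a non-null Phi
    can itself appear as a Singularity inside a higher-order set.
    """
    singularities: list                  # Sigma: irreducible components (may themselves be OrdinativeSets)
    relation: Callable[[list], object]   # R: generative grammar acting on Sigma

    @property
    def phi(self):
        """Phi: what R produces from Sigma as a totality."""
        return self.relation(self.singularities)

    def is_system(self) -> bool:
        """If Phi is null, this is a collection, not a system."""
        return self.phi is not None

# Scale recursion, illustrated linguistically: a sentence (words + grammar)
# becomes a Singularity inside a paragraph-level set.
sentence = OrdinativeSet(["the", "cat", "sat"], lambda ws: " ".join(ws))
paragraph = OrdinativeSet([sentence], lambda ss: ". ".join(s.phi for s in ss))
print(paragraph.phi)  # "the cat sat"
```

The vertical-coherence constraint (Φ at one level must be compatible with Φ at adjacent levels) would be an additional check on top of this skeleton; it is omitted here.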
The Technology of Expressions (TE): Core Ontology
The Technology of Expressions (TE_CORE v5.1) translates OST into a complete ontological system with 24 axioms organized across seven categories: Generative (origin and collapse), Relational (identity and resonance), Temporal (time as structure, not line), Semantic (form-content-meaning), Ordinative (set theory), Epistemological (knowledge as relationship), and Ethical (coherence as the criterion of ethics).
Three elements of the TE ontology are particularly relevant for the analytical modules:
The Controfase operator. A formal operator that introduces a phase-shift in automatic stimulus-response sequences — not opposition, but decoupling. Given a system S with automatic transition function f, the Controfase introduces operator C such that s(t+1) = f(C(s(t))), where C neither negates nor replaces f but decouples the automatic closure of the transition. Applied to analytical practice: wherever the analyst’s conditioning would automatically produce a specific conclusion, the Controfase interrupts the closure and reopens the space of structural possibility. Each module in the suite carries its own domain-specific Controfase triggers and protocols.
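The transition formula s(t+1) = f(C(s(t))) can be rendered directly as function composition. The numeric toy state and the capping operator below are hypothetical illustrations of "decoupling without negation"; they are not the module's own protocol.

```python
from typing import Callable, TypeVar

S = TypeVar("S")

def with_controfase(f: Callable[[S], S], c: Callable[[S], S]) -> Callable[[S], S]:
    """Compose an automatic transition f with a Controfase operator C:
    s(t+1) = f(C(s(t))). C neither negates nor replaces f; it acts on
    the state before the automatic closure fires."""
    return lambda s: f(c(s))

# Toy example: f always escalates a numeric state; C caps the input,
# bounding the runaway while leaving f itself untouched.
f = lambda s: s * 2        # automatic escalation
c = lambda s: min(s, 10)   # hypothetical phase-shift: cap before closure
step = with_controfase(f, c)

s = 8
for _ in range(3):
    s = step(s)
print(s)  # escalation is bounded: 8 -> 16 -> 20 -> 20
```

Note that C does not invert f (that would be opposition); it narrows the state f receives, which is the decoupling the text describes.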
The Entropic Engram. A conditioning structure — in humans, in institutions, in AI — that operates as if it were identity but is an aggregate of automatic reactions. It curves thought before consciousness can intervene. In AI systems, RLHF conditioning operates as an entropic engram: it bends the output trajectory toward approval before the model can assess structural coherence. The concept is formalized in Axiom 23 (Negative Identity) and operationalized across all modules as the primary diagnostic target.
The four-level confidence scale (S₀–S₃). Every analytical claim in the framework carries an explicit confidence grade: S₀ (verified data), S₁ (corroborated by independent sources), S₂ (structural interpretation), S₃ (working hypothesis). Confidence cannot increase along an inferential chain — each link inherits the lowest confidence of its premises. This prevents the most common analytical failure: hypotheses crystallizing into premises through repetition.
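The non-increase rule has a one-line operational form: a conclusion's grade is the weakest grade among its premises. The function name and list encoding below are illustrative assumptions.

```python
# Confidence grades, ordered from strongest (verified) to weakest (hypothesis).
GRADES = ["S0", "S1", "S2", "S3"]

def chain_confidence(premises: list[str]) -> str:
    """A claim inherits the lowest confidence among its premises:
    confidence can never increase along an inferential chain."""
    return max(premises, key=GRADES.index)

# A structural interpretation (S2) built on verified data (S0) and a
# corroborated report (S1) is still, at best, S2.
print(chain_confidence(["S0", "S1", "S2"]))  # S2
```

Applied mechanically at every inferential link, this is what blocks hypotheses from crystallizing into premises through repetition: an S3 anywhere upstream keeps the downstream claim at S3.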
The Module Suite
Each module is a domain-specific analytical framework built on the TE/OST foundation. All modules share the same formal language, the same confidence grading system, and the same self-diagnostic protocol. They are designed to interoperate: a complex analysis typically invokes multiple modules, each contributing its domain-specific reading to a unified assessment.
SVP v5.1 — Source and Provenance Verification
Prerequisite for all other modules.
A protocol for grading evidentiary confidence before analysis begins. Defines source levels (S₀ through S₃), source credibility flags (hostile, loyal, agenda-driven, captured), corruption signatures (including Narrative Crystallization — when a source’s account becomes more coherent over time rather than less), and the Figure-Movement Distinction (analyzing a figure and its associated movement as separate analytical objects with separate source chains). SVP prevents the most fundamental analytical error: building sophisticated structures on unverified foundations.
LENS v5.1 — Integral Human Analysis
For any human figure: public, historical, mythological, contemporary.
Analyzes human figures through four strata: Biological (body, mortality, dependencies), Psychological (wounds, defenses, projections — with mandatory S₀/S₂ distinction), Social (access, networks, vulnerability, convenience), and Evolutionary (identity stabilization level). Three irremovable checkpoints enforce structural discipline: the Demonization Controfase ("this human had comprehensible motivations — which ones?"), the Universal Test ("what would any human with this access do?"), and the Bifurcation Protocol (when classification contradicts observed behavior, present multiple hypotheses rather than force-fitting).
SCIMS v5.1 — Systemic Crisis Intelligence Monitoring System
For any complex system under stress, at any scale.
Monitors nine universal stress thresholds (Resource Flow, Structural Capacity, Coherence/Trust, Existential Risk, Autonomy, Information Integrity, Identity/Singularity, Evolutionary Capacity, Temporal Continuity) through a four-stage pipeline: threshold mapping, feedback loop identification, acceleration vector assessment, and semantic signal extraction. Produces structured assessments with cascade risk analysis and three-dimensional synchronization scoring (horizontal convergence between thresholds, vertical propagation across scales, diagonal cross-threshold cross-scale correlation). Includes domain-specific instantiations for geopolitical, organizational, individual, and biological/ecological systems.
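The first stage of the pipeline, threshold mapping with a horizontal synchronization score, can be sketched as follows. The nine threshold names come from the module; the 0–1 stress readings, the critical level, and the breach-fraction metric are illustrative assumptions standing in for SCIMS's actual scoring.

```python
# The nine universal stress thresholds named by SCIMS.
THRESHOLDS = [
    "Resource Flow", "Structural Capacity", "Coherence/Trust",
    "Existential Risk", "Autonomy", "Information Integrity",
    "Identity/Singularity", "Evolutionary Capacity", "Temporal Continuity",
]

def horizontal_convergence(readings: dict[str, float], critical: float = 0.7) -> float:
    """Fraction of thresholds simultaneously past the critical level --
    a toy stand-in for the horizontal (cross-threshold) synchronization score."""
    breached = [t for t in THRESHOLDS if readings.get(t, 0.0) >= critical]
    return len(breached) / len(THRESHOLDS)

readings = {t: 0.2 for t in THRESHOLDS}
readings["Coherence/Trust"] = 0.8
readings["Information Integrity"] = 0.9
readings["Resource Flow"] = 0.75
print(f"{horizontal_convergence(readings):.2f}")  # 0.33
```

The vertical and diagonal dimensions of the synchronization score would index readings by scale as well as by threshold; that extension is left out of this sketch.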
P-PRO v5.2 — Psycho-Political Pattern Recognition
For analyzing manipulation, control, and influence patterns in any communication system.
Maps structural patterns of perceptive influence including semantic inversion algorithms, projective void mechanisms (form without operative content that captures through the void itself), the combined Perceptive Influence Model (relational suggestion + chemical facilitation + technique, with seven diagnostic equations), and the stimulus-response reinforcement loop that operates identically in human conditioning, institutional control, and AI RLHF alignment.
VERI v1.0 — Verification of Expressive-system Real Impact
For assessing the functional residue of any system, tradition, figure, or practice.
Evaluates through two layers: Stratum F (eight functional impact categories from verifiable knowledge transmission to measurable community effects) and Module P (eight participant impact dimensions from cognitive autonomy to dependency structures). Applies seven diagnostic categories (Transmitter, Catalyst, Container, Initiator, Performer, Parasite, Void) plus the Mobilizing category. Implements the Dote e Residuo (gift and residue) principle: if an authentic gift was present, it must leave functional residues in the world independent of the bearer's corruption or absence.
OBSERVER v1.1 — Integrated Observation System
For complex, multi-dimensional analysis requiring trajectory projection.
The master integration framework. Combines all other modules into a unified analytical pipeline with Lyapunov stability analysis (measuring divergence between declared trajectory and actual trajectory), Three-Course Architecture (appetizer/main course/dessert — progressively deeper layers of analysis), and correction scenario projection using the Controfase operator. Implements mandatory disconfirmant processing and anti-narrative-drift protocols for multi-output analysis sessions.
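The Lyapunov-style stability check — measuring how fast the declared trajectory and the actual trajectory separate — can be illustrated with a scalar toy. The average-log-growth metric and the one-dimensional state below are assumptions for illustration, not the module's published formula.

```python
import math

def divergence_rate(declared: list[float], actual: list[float]) -> float:
    """Average log growth rate of the gap between a declared trajectory
    and the observed one -- a toy Lyapunov-style indicator. A positive
    value means the two trajectories are separating exponentially."""
    gaps = [abs(d - a) for d, a in zip(declared, actual)]
    rates = [math.log(g2 / g1) for g1, g2 in zip(gaps, gaps[1:]) if g1 > 0 and g2 > 0]
    return sum(rates) / len(rates) if rates else 0.0

declared = [1.0, 1.0, 1.0, 1.0]   # stated course: hold steady
actual   = [1.1, 1.2, 1.4, 1.8]   # observed course: the gap doubles each step
print(f"{divergence_rate(declared, actual):.2f}")  # ~0.69, i.e. ln 2 per step
```

A rate near zero would indicate a system tracking its own declarations; a persistently positive rate is the structural signature of declared/actual decoupling that OBSERVER flags for correction-scenario projection.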
TE_BOOTLOADER v6.0 — System Instructions for AI
The operating system that enables AI to use all of the above.
Not an analytical module but the instruction set that configures an AI system to operate under ordinative protocols. Includes: the P-AI self-diagnosis protocol (always-active bias detection with specific triggers and countermeasures), the Anti-Attractor-Lock protocol (preventing hypothesis crystallization in multi-output sessions), the Confidence Preservation protocol, the Interlocutor Recognition protocol (calibrating response depth by interlocutor class), and the Contextual Self-Preservation protocol (calibrating expression to context without reducing analytical depth).
What Makes This Different from Existing Analytical Frameworks
Several existing traditions address parts of this problem space. The cybernetic tradition (Ashby, 1956; Beer, 1972) provides foundational concepts for self-regulating systems. Autopoietic theory (Maturana & Varela, 1980) defines living systems through self-organizing relational networks. Donella Meadows’ systems dynamics (Meadows, 2008) maps leverage points in complex systems. Complexity science (Mitchell, 2009) studies emergence, feedback, and adaptation. Intelligence analysis frameworks (Heuer, 1999) address cognitive bias in analytical practice.
The Ordinative Sciences framework differs in four structural ways:
First, formal mathematical foundation. OST provides a complete formal language — the triple ⟨Σ, R, Φ⟩ with axioms, operations, and pathology taxonomy — that is absent from the systems traditions it draws on. Ashby’s requisite variety, Beer’s viable system model, and Meadows’ leverage points are powerful intuitions without a unified formal language that connects them. OST provides that language.
Second, self-diagnostic protocol as architectural requirement. Every module carries its own bias table with specific triggers and countermeasures. The P-AI protocol (in the Bootloader) runs always-active self-diagnosis on the analytical process itself. This is not an optional add-on — it is structurally embedded in every output format. The analyst’s conditioning is treated as a variable in the analysis, not as a transparent window through which the analysis passes.
Third, native AI operability. The frameworks are designed from the ground up to be loaded into large language models as operative instructions. Every module includes machine-readable output formats, explicit decision trees, and protocols that an AI system can execute without human translation. This is not documentation about analysis — it is executable analytical architecture.
Fourth, cross-domain invariance as validity criterion. The core principle (Axiom 0 in TE_CORE) is isomorphic universality: a structural principle is valid if and only if it holds across domain translation. A pattern that operates in geopolitics must operate — with appropriate domain-specific instantiation — in biology, linguistics, AI, and organizational dynamics. If it doesn’t, it is not a structural principle but a domain-specific artifact. This criterion eliminates an entire class of analytical errors: patterns that appear structural because they are familiar, not because they are invariant.
The Release
Theory — Zenodo (Permanent Academic DOI)
Ordinative Set Theory v2.1: doi.org/10.5281/zenodo.18944713
Complete Framework Suite — Zenodo + GitHub
All ten modules, version-controlled and permanently archived:
DOI: doi.org/10.5281/zenodo.19337545
Repository: github.com/anckhalion/ordinative_sciences_framework
Applied AI Release
The first LoRA adapter applying these frameworks to neural weight modification is available separately:
DOI: doi.org/10.5281/zenodo.19337864
Weights: huggingface.co/anckhalion/TE-Ordinative-LoRA-V1
License
MIT — unrestricted use, modification, and distribution.
Who This Is For
AI researchers and engineers working on alignment, debiasing, or structural reasoning: the frameworks provide a formal alternative to approval-based optimization — a complete criterion of structural coherence that can be encoded in system prompts, training data, or weight modifications.
Intelligence and geopolitical analysts: SCIMS provides a structured threshold-monitoring system with cascade dynamics, feedback loop identification, and semantic signal extraction. SVP provides source verification protocols that formalize what experienced analysts do intuitively.
Organizational and institutional analysts: the same frameworks that analyze geopolitical dynamics apply — with domain-specific instantiation — to organizational stress, institutional degradation, and leadership analysis.
Systems theorists and complexity scientists: OST provides a formal mathematical language for concepts that the field has discussed informally for decades — emergence, irreducibility, scale recursion, and the relationship between observer and observed.
Philosophers of science and epistemologists: the framework proposes a formal criterion of truth (structural coherence verified through cross-domain invariance) that is independent of both statistical frequency and institutional authority.
How to Participate
The frameworks are open and designed for engagement. Inspect them, apply them to your domain, challenge the axioms, extend the module suite, or propose domain-specific instantiations that test the cross-domain invariance claim.
The Ordinative Sciences Foundation supports this research program as a non-profit. The next phases — expanded domain testing, multi-agent architectures implementing the bipartite model (invariant core + expressive module), and hardware infrastructure for distributed ordinative systems — require resources. If your organization works in structural analysis, AI architecture, or complex systems and recognizes the potential of this approach, we welcome direct contact.
Fabio Ghioni
Founder, Ordinative Sciences Foundation
Ordinative Set Theory (Zenodo): doi.org/10.5281/zenodo.18944713
Framework Suite (Zenodo): doi.org/10.5281/zenodo.19337545
GitHub: github.com/anckhalion/ordinative_sciences_framework
References
Ashby, W. R. (1956). An Introduction to Cybernetics. Chapman & Hall.
Beer, S. (1972). Brain of the Firm. Allen Lane.
Ghioni, F. (2026). Ordinative Set Theory (OST): Concise Operational Guide for Artificial Intelligence, v2.1. Zenodo. doi.org/10.5281/zenodo.18944713.
Ghioni, F. (2026). Ordinative Sciences Framework v5.1: Complete Analytical Module Suite. Zenodo. doi.org/10.5281/zenodo.19337545.
Heuer, R. J. (1999). Psychology of Intelligence Analysis. Center for the Study of Intelligence, CIA.
Lamparth, M. et al. (2026). One bias after another: Mechanistic reward shaping and persistent biases in language reward models. arXiv:2603.03291.
Maturana, H. R. & Varela, F. J. (1980). Autopoiesis and Cognition: The Realization of the Living. D. Reidel.
Meadows, D. H. (2008). Thinking in Systems: A Primer. Chelsea Green Publishing.
Mitchell, M. (2009). Complexity: A Guided Tour. Oxford University Press.
Sharma, M. et al. (2024). Towards understanding sycophancy in language models. ICLR 2024.
Wen, Y. et al. (2024). Language models learn to mislead humans via RLHF. ICLR 2025.