Not opinions. Data. Every claim quantified. Every dataset downloadable. Every analysis reproducible. If you can't verify it yourself, we haven't demonstrated it.
The academic publishing industry generates $19 billion annually by placing publicly funded research behind paywalls and selling access back to the institutions that produced it. The average journal article costs $35 to read. The average university library spends millions per year on subscriptions to access research its own faculty wrote.
The University of OMXUS rejects this model entirely. Charter Article I mandates that all research produced by the University is published under open-access terms, permanently and without exception. The commitment covers every paper, book, study, and dataset the University produces.
This is not generosity. It is the minimum ethical standard for research that claims to serve the public. If the public cannot read it, it does not serve the public.
Research Programmes
Four programmes that anchor the University's research across all twelve faculties.
The Signal Inversion Research Programme began with a single finding from Bond and DePaulo's 2006 meta-analysis of 206 deception detection studies: human accuracy at detecting lies is 54%. A coin flip is 50%. This means the courts, police interviews, jury deliberations, and parole hearings that our justice systems depend on rest on credibility judgements that are barely better than random chance.
The programme extends this finding into 10 reproducible studies demonstrating that the error is not random — it is systematic. Institutions consistently reward the markers of deception (confidence, narrative coherence, emotional control, certainty) and penalise the markers of honest communication (hesitation, self-contradiction, emotional expression, uncertainty). The signal is not merely missed. It is inverted.
Every study includes its full dataset and executable analysis code. Download them. Run them. If the findings do not survive your scrutiny, they do not deserve your trust.
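In that spirit, here is a minimal Python sketch, not drawn from the programme's replication packages, of how little practical edge a 54% hit rate buys. The figure of 100 judgements is an arbitrary assumption for illustration.

```python
# Illustrative only: how often does a 54%-accurate judge actually outscore a
# coin flip over a run of credibility judgements? (Normal approximation to
# two independent binomial counts.)
from math import sqrt
from statistics import NormalDist

chance, human = 0.50, 0.54   # coin flip vs. meta-analytic human accuracy
n = 100                      # hypothetical number of credibility calls

mu_h, sd_h = human * n, sqrt(n * human * (1 - human))
mu_c, sd_c = chance * n, sqrt(n * chance * (1 - chance))

# Probability the human judge's correct-call count exceeds the coin's.
p_better = 1 - NormalDist(mu_h - mu_c, sqrt(sd_h**2 + sd_c**2)).cdf(0)

print(f"Expected correct calls: judge {mu_h:.0f}/{n} vs. coin {mu_c:.0f}/{n}")
print(f"Chance the judge outscores the coin over {n} calls: {p_better:.0%}")
```

Over a hundred calls, the trained judge beats the coin only about seven times in ten. The edge exists, but it is nowhere near what institutions built on credibility assessment assume.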
Bayesian prediction models are used across institutional decision-making: risk assessment in criminal justice, diagnostic tools in medicine, predictive policing algorithms, child protection scoring systems, credit scoring, and insurance actuarial tables. All of them require a "prior" — an initial probability estimate — before they can begin calculating.
The Prior Problem programme demonstrates, across 5 reproducible studies, that these priors encode existing biases as mathematical certainty. A risk assessment tool trained on historical conviction data produces higher risk scores for demographics that were historically over-policed — not because those demographics are higher-risk, but because the training data reflects the policing bias. The model then produces predictions that confirm the bias, which generates new data that further reinforces the model. The loop is self-sealing.
The result is algorithmic laundering: the conversion of human prejudice into mathematical authority. The models don't predict reality. They produce reality. And they produce it in a way that validates themselves while harming their subjects.
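A toy simulation makes the loop visible. The numbers below (true rate, detection rate, initial policing split) are invented for illustration and this is not the programme's model; the point is only that with identical underlying behaviour, a prior built from enforcement records never lets the recorded gap close.

```python
# Illustrative only: equal true behaviour, unequal historical policing, and a
# "risk model" whose prior is simply the recorded history.
import random

random.seed(0)
TRUE_RATE = 0.05                 # both groups offend at the same rate
DETECT_PER_UNIT = 0.4            # chance an offence is recorded, per unit of policing
POPULATION = 10_000

policing = {"A": 1.0, "B": 2.0}  # group B starts out over-policed
recorded = {"A": 0, "B": 0}

for generation in range(5):
    # Offences are only recorded where enforcement is concentrated.
    for group, effort in policing.items():
        p_record = TRUE_RATE * min(DETECT_PER_UNIT * effort, 1.0)
        recorded[group] += sum(random.random() < p_record for _ in range(POPULATION))

    # The model's "prior" is the recorded history; deployment follows the prior.
    total = sum(recorded.values())
    policing = {g: 3.0 * recorded[g] / total for g in recorded}

for g in recorded:
    print(f"Group {g}: recorded offences {recorded[g]}, final policing effort {policing[g]:.2f}")
```

Group B ends the simulation with roughly twice the recorded offences and twice the policing of group A, despite identical behaviour. The disparity the prior started with is exactly the disparity the data keeps "confirming".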
The Mellor Five Domains Model (2020) is the international standard for assessing whether a captive environment meets the biological and psychological needs of the organism within it. It evaluates five domains: nutrition, environment, health, behavioural interaction, and mental state. It is used by accredited zoos worldwide.
The University applied the Mellor framework — unchanged — to the living conditions of Homo sapiens in modern industrialised nations. The assessment spans 8 papers and a 133,000-word book. The findings: modern human habitation fails on nutrition (industrial food systems, 3,000 untested additives), environment (93% indoor confinement, circadian disruption), behavioural interaction (systematic removal of play, Dunbar-number violations), and mental state (chronic stress, rising anxiety and depression). Four of five domains. By the standards we apply to captive gorillas, the human enclosure is not adequate.
This is not metaphor. It is the same framework, applied to the same kingdom, the same order, the same family. The methodology does not care that the subject species built the enclosure themselves.
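For readers who want the shape of the assessment rather than the rhetoric, here is a minimal sketch of the framework as a data structure. The verdicts follow the findings summarised above; the field names and the simple pass/fail scoring are assumptions of this sketch, not the programme's instrument.

```python
# A minimal sketch (not the programme's instrument) of a Five Domains
# assessment applied unchanged to the human habitat. Verdicts follow the
# findings summarised above; the pass/fail scoring is an assumption.
from dataclasses import dataclass

@dataclass
class DomainAssessment:
    domain: str
    adequate: bool
    evidence: str

human_enclosure = [
    DomainAssessment("nutrition", False, "industrial food systems, ~3,000 untested additives"),
    DomainAssessment("environment", False, "93% indoor confinement, circadian disruption"),
    DomainAssessment("health", True, "not assessed as failing in the published papers"),
    DomainAssessment("behavioural interaction", False, "removal of play, Dunbar-number violations"),
    DomainAssessment("mental state", False, "chronic stress, rising anxiety and depression"),
]

failing = [a.domain for a in human_enclosure if not a.adequate]
print(f"Failing {len(failing)} of {len(human_enclosure)} domains: {', '.join(failing)}")
```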
Tied with Technology as the largest faculty by paper count (16 papers), the Justice Architecture programme documents the structural mechanics of a system designed for conviction, not justice. The centrepiece is Constructed Guilt (26,146 words), which traces the production of guilt through seven institutional sites — from the moment of arrest through media coverage to jury deliberation — and demonstrates that at each site, the system's structure produces guilt regardless of the subject's actual culpability.
Key findings include:

- Conviction rates function as prosecutorial performance metrics, creating incentive structures that reward conviction independent of guilt.
- Confession protocols (the Reid Technique and its descendants) produce false confessions at documented rates in laboratory and field settings.
- Credibility assessment in courtroom settings inverts truth signals (see: Signal Inversion programme).
- Neurodivergent populations, particularly autistic individuals, are systematically disadvantaged by communication norms that conflate neurotypical presentation with truthfulness.
The programme does not advocate for reform of the existing system. It documents why reform from within a system designed for conviction is structurally impossible, and examines alternatives with measured outcomes.
Research Output
The complete distribution of research across the University's twelve academic faculties.
| Faculty | Papers | Key Topics |
|---|---|---|
| Justice & Systems | 16 | Constructed guilt, false confessions, credibility inversion, conviction incentives, neurodivergent vulnerability, prevention over punishment |
| Technology & Sovereignty | 16 | Sovereign AI, platform sovereignty, mesh networking, cryptographic identity, protocol autonomy, the uncensorable library |
| Health, Food & Body | 9 | Dermal absorption, food toxicology, gut-brain axis, circadian disruption, barefoot biomechanics, Kitava reference population |
| Movement & Enclosure | 8 | Five Domains framework, 93% indoor confinement, play deprivation, isolation machine, movement infrastructure removal |
| Economics & Work | 7 | $19 trillion paradox, bullshit jobs (37%), 22-hour week, cooperative capitalism, Mondragon, productivity-wage divergence |
| Education & Psychology | 7 | Prussian model, terror management, ideological Rorschach, Two Monkey Theory, bystander effect, obedience factory |
| Signal Inversion | 7 + 10 studies | 54% deception detection, systematic credibility inversion, confession linguistics, cross-cultural analysis |
| Democracy & Governance | 5 | 178 years of Swiss evidence, quadratic voting, trust-first governance, Dunbar-scale federation |
| Sanctuary & Design | 5 | Zoological framework, grief-to-design methodology, 14 design goals, prevention-first specifications |
| Emergency Response | 3 | Hatzolah (sub-3-min), CAHOOTS (0.01% police backup), surf lifesaving, housing first |
| Drugs & Harm Reduction | 2 | Portugal (80% overdose reduction), Rat Park, racial origins of prohibition, pharmacies over car parks |
| Identity & Proof | 2 | Deterministic cryptographic identity, Sybil resistance, self-sovereign proof |
Access
All research is freely accessible through multiple channels. No registration. No paywall. No institutional subscription.
- The University's primary publishing platform. All 96 papers, fully searchable and readable online. The definitive collection.
- Complete catalogue organised by faculty, with titles, authors, word counts, and direct links to every paper, book, and study.
- Dedicated research portal for the 10 reproducible credibility studies. Interactive analysis tools. Downloadable datasets.
- All analysis code, all datasets, all build tools. Open source. CC BY 4.0. Clone it, fork it, reproduce it, extend it, improve it.
The University of OMXUS holds its own research to the same standard it applies to the systems it studies. The following methodological commitments are encoded in the Charter and enforced through the Ethics Board review process.
No finding is published without its complete replication package: source data, analysis code, and step-by-step methodology. "Trust the experts" is not a citation. "Our proprietary model suggests" is not evidence. A finding that cannot be independently reproduced by a reader with access to the published materials has not been demonstrated — it has been asserted. This University publishes demonstrations. Every one. No exceptions.
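In practice, "complete replication package" is something a script can check. The sketch below is a hypothetical illustration; the directory and file names are assumptions, not the University's actual layout.

```python
# A minimal sketch of an automated completeness check for a replication
# package. File and directory names are hypothetical.
from pathlib import Path

REQUIRED = {
    "data":        ["data/"],                        # source data, as published
    "analysis":    ["analysis/", "analysis.py"],     # executable analysis code
    "methodology": ["METHODOLOGY.md", "README.md"],  # step-by-step write-up
}

def check_package(root: str) -> list[str]:
    """Return the list of missing components for a replication package."""
    base = Path(root)
    missing = []
    for component, candidates in REQUIRED.items():
        if not any((base / c).exists() for c in candidates):
            missing.append(component)
    return missing

if __name__ == "__main__":
    gaps = check_package("replication-package/")
    print("complete" if not gaps else f"missing: {', '.join(gaps)}")
```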
Research is reviewed not merely for methodological soundness but for engagement with the strongest possible counter-argument. Charter Article VI requires every paper to steelman the opposition before publishing its own thesis. A paper on cooperative capitalism must engage with the best arguments for competitive markets. A paper on decriminalisation must engage with the best arguments for prohibition. If you cannot articulate why a thoughtful person might disagree with you, you do not understand your own position well enough to publish it.
We report effect sizes, confidence intervals, and practical significance. The p < 0.05 threshold — the standard in most academic publishing — is insufficient for publication at the University of OMXUS. A statistically significant trivial effect is a trivial finding. We want to know not just whether an effect exists, but whether it is large enough to matter. The replication crisis in psychology, medicine, and social science was produced by institutions that confused statistical significance with importance. We do not repeat that mistake.
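A worked example of the distinction, with invented numbers: a 0.2-point difference on a 100-point scale, measured across two groups of 50,000, clears the p < 0.05 bar while remaining trivial by any effect-size benchmark.

```python
# Illustrative only: statistical significance without practical significance.
from math import sqrt
from statistics import NormalDist

n = 50_000                       # participants per group
mean_a, mean_b = 100.0, 100.2    # a 0.2-point difference on a 100-point scale
sd = 15.0                        # same spread in both groups

# Two-sample z-test on the difference in means.
se = sqrt(sd**2 / n + sd**2 / n)
z = (mean_b - mean_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

# Standardised effect size (Cohen's d).
cohens_d = (mean_b - mean_a) / sd

print(f"p-value   = {p_value:.3f}  (below the 0.05 threshold)")
print(f"Cohen's d = {cohens_d:.3f}  (far below even a 'small' effect of 0.2)")
```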
Every paper includes an explicit limitations section that acknowledges what the research cannot prove, where the data is insufficient, and what counter-evidence exists. Research that does not acknowledge its own boundaries is not scholarship — it is advocacy. Both have value. They have different names because they are different things.
University research is not static. When new evidence emerges, papers are updated. When errors are found, they are corrected publicly with version history preserved. When a finding is superseded, the original paper links to the superseding work. Academic publishing traditionally treats papers as permanent artefacts. We treat them as living documents that improve with scrutiny.