Modeling, Constraint, and the Systemization of Civilization

by Curt Doolittle

I. Introduction: An Outsider’s Problem

I think of myself as a scientist who researches epistemology. I have almost nothing in common with philosophers outside of a very few from the 20th century, and even then I approach their work through the scientific method, in particular the methods of computer science, while retaining loyalty to economics as the equivalent, and extension, of physics into biology and behavior.

I’ve often been told my work feels alien, even to those who grasp its depth. And for years, I struggled to explain why. I’m not a traditional philosopher. I’m not a political theorist. I’m not even an economist in the academic sense. And yet, I’ve built what few within those traditions have achieved: a complete, operational system for modeling and governing human cooperation under constraint.

The reason is simple: I think differently. My training was different. My tools were different. My standards of success were different. I didn’t study ideas to debate them. I modeled systems to see if they could survive. Where others were trying to justify beliefs, I was trying to simulate cooperation at scale under adversarial and evolutionary pressure.

In this article I’ll try to explain why: not only to help you understand my work, but to make clear why it feels, and can be, challenging.

II. Constraint vs. Justification: The Great Divide

Most intellectuals are trained in justificatory reasoning. They begin with a belief—human dignity, equality, liberty, justice—and then build arguments to justify those beliefs. They use analogies, metaphors, traditions, and intuitions. This is the dominant method in philosophy, law, ethics, and politics.

But that was never my method. From early on, I was immersed in constraint systems: relational databases, state machines, object-oriented design, and behavior modeling. I wasn’t asking, “What should we believe?” I was asking, “What survives mutation, recursion, noise, asymmetry, and adversarial input?”

This isn’t a difference in emphasis. It’s a complete difference in epistemology.

I learned early that systems must survive constraint, not argument. In software, in logistics, in simulation—you don’t win with persuasion. You win with computable reliability.

So when I turned my attention to human systems—law, economics, governance—I carried that constraint-first logic with me. And I started to see clearly: the failure modes of our civilization are not ideological. They are architectural. They result from unverifiable claims, unmeasurable policies, unjustifiable asymmetries, and moral systems too vague to enforce.

III. Programming as Epistemology

Marvin Minsky once said that programming is not just a technical skill—it is a new way of thinking. And he was right. Programming rewires your brain. It trains you to:

Think in systems of interacting agents.

Model causality, not just correlation.

Define terms operationally, not rhetorically.

Iterate and refactor for resilience under change.

Accept only what can be compiled, executed, and tested.

That’s a fundamentally different mental architecture from that of most philosophers, theologians, or political theorists.

It’s not about argument. It’s about constructibility.
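To make “constructibility” concrete, here is a minimal sketch in Python (the names, fields, and tolerance are purely illustrative assumptions, not part of any formal system) of the difference between a rhetorical definition and an operational one: “reciprocity” stated as an executable predicate over a recorded exchange rather than as an appeal to intuition.

```python
# Minimal sketch: an operational (testable) definition of "reciprocity".
# All names and thresholds are illustrative assumptions, not a canonical model.

from dataclasses import dataclass

@dataclass
class Exchange:
    given: float      # value the actor transferred away
    received: float   # value the actor received in return
    consented: bool   # both parties agreed, free of coercion

def is_reciprocal(x: Exchange, tolerance: float = 0.0) -> bool:
    """Decidable claim: consent plus no net uncompensated transfer beyond tolerance."""
    return x.consented and (x.received - x.given) >= -tolerance

# The claim "this trade was reciprocal" can now be executed and tested:
print(is_reciprocal(Exchange(given=10.0, received=9.0, consented=True), tolerance=2.0))  # True
print(is_reciprocal(Exchange(given=10.0, received=3.0, consented=False)))                # False
```

A definition stated this way can be compiled, run, and falsified; a definition stated as rhetoric cannot.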

And this insight changed everything for me. I stopped looking for compelling stories and started looking for models that didn’t collapse under recursion. My brain stopped thinking in metaphors and started thinking in grammars, schemas, and state transitions.

This mode of thought is rare in the academy. But it is essential if your goal is not to win an argument—but to engineer a civilization.

IV. Modeling Human Action from Beginning to End

Over the course of my career, I’ve modeled:

The cognitive inputs to human behavior (perception, valuation, instinct).

The economic expressions of that behavior (preferences, trade, institutions).

The legal consequences of those behaviors (disputes, resolutions, enforcement).

This means I didn’t just study one domain. I modeled the entire causal chain:

Cognition → Incentive → Action → Conflict → Adjudication → Restitution

And I noticed something crucial: the same logical structure reappeared at every level.

That structure was evolutionary computation.

Trial and error.

Cost and benefit.

Variation and selection.

Reciprocity and punishment.

In other words: the universe behaves as a cooperative computation under constraint, and so must any successful human system.
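As a minimal sketch of what that claim means in practice (in Python; the population size, payoffs, detection rate, and mutation rate are all illustrative assumptions), here is cooperation modeled as an evolutionary computation: variation, selection on cost and benefit, and punishment of defection, iterated over generations.

```python
# Minimal sketch: cooperation as an evolutionary computation under constraint.
# Variation, selection on payoff, and punishment of defection; parameters are illustrative.

import random

POP, GENS, MUTATION = 100, 200, 0.05
BENEFIT, COST, PUNISH = 3.0, 1.0, 2.0   # gain to partner, cost to cooperator, fine on detected defection

def play(p_coop_a: float, p_coop_b: float) -> tuple[float, float]:
    """One pairwise interaction; defectors risk punishment under imperfect detection."""
    a_coop, b_coop = random.random() < p_coop_a, random.random() < p_coop_b
    pay_a = (BENEFIT if b_coop else 0.0) - (COST if a_coop else 0.0)
    pay_b = (BENEFIT if a_coop else 0.0) - (COST if b_coop else 0.0)
    if not a_coop and random.random() < 0.5:
        pay_a -= PUNISH
    if not b_coop and random.random() < 0.5:
        pay_b -= PUNISH
    return pay_a, pay_b

pop = [random.random() for _ in range(POP)]           # trait: probability of cooperating
for _ in range(GENS):
    random.shuffle(pop)
    fitness = [0.0] * POP
    for i in range(0, POP, 2):                        # pair off and interact
        fa, fb = play(pop[i], pop[i + 1])
        fitness[i] += fa
        fitness[i + 1] += fb
    floor = min(fitness)
    weights = [f - floor + 1e-9 for f in fitness]     # selection proportional to payoff
    pop = random.choices(pop, weights=weights, k=POP)
    pop = [min(1.0, max(0.0, p + random.gauss(0, MUTATION))) for p in pop]  # variation

print(f"mean cooperation after {GENS} generations: {sum(pop) / POP:.2f}")
```

Nothing in the sketch argues for cooperation. Whether cooperation spreads is an outcome of the constraints, which is the point.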

So I asked the natural next question: Can we model that process at every level of civilization—cognitive, moral, legal, economic, and political? And the answer was yes.

But no one had done it—because no one had unified those grammars under the same method of operational, testable, decidable reasoning.

V. Stories vs. Simulations

Most intellectual traditions are still built around narratives:

Plato: allegories.

Hegel: dialectics.

Rawls: thought experiments.

Marx: historical inevitabilities.

Even most economists rely on idealized simplifications.

But I don’t think in narratives. I think in simulations.

I model actors.

I define constraints.

I calculate outcomes.

I test for failure modes.

This is why my work often feels alien to others. I’m not using their grammar. I’m not offering a story. I’m offering a compiler—a machine for deciding moral, legal, and institutional questions under real-world constraints.

This is why I define truth not as “correspondence” or “coherence,” but as survival under adversarial recursion with no externalities. That is a systems definition of truth. And it forces an entirely new set of constraints on what can be claimed, believed, or enforced.
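As a rough illustration of what “survival under adversarial recursion” might look like operationally (the candidate rule, the externality measure, and the loss budget below are hypothetical stand-ins), a claimed rule is retained only if repeated adversarial inputs, including the rule applied to its own outputs, never impose an uncompensated loss beyond what the parties have agreed to bear:

```python
# Minimal sketch: testing a claim for survival under adversarial recursion.
# The rule, externality measure, and budget are illustrative assumptions only.

import random

def proposed_rule(transfer: float) -> float:
    """Candidate policy (purely illustrative): skim 10% from positive transfers."""
    return transfer * 0.9 if transfer > 0 else transfer

def externality(before: float, after: float) -> float:
    """Uncompensated loss imposed by one application of the rule."""
    return max(0.0, before - after)

def survives(rule, trials: int = 10_000, depth: int = 3, budget: float = 0.2) -> bool:
    """Keep the rule only if no sampled input, recursed through the rule's own
    outputs, imposes a loss beyond the agreed budget."""
    for _ in range(trials):
        value = random.uniform(-1_000.0, 1_000.0)   # adversarial-ish input sampling
        for _ in range(depth):                      # recursion: rule applied to its own output
            next_value = rule(value)
            if externality(value, next_value) > budget * abs(value) + 1e-9:
                return False                        # failure mode found; the claim is rejected
            value = next_value
    return True

print("rule survives:", survives(proposed_rule))
```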

VI. What Emerged: A Civilizational Operating System

What emerged from this lifelong modeling wasn’t a “theory.” It was a constructive logic of human cooperation. A universal language for modeling truth, reciprocity, and decidability.

I built:

A grammar of operational speech.

A system of reciprocal insurance.

A legal architecture based on testifiability and restitution.

An economic model based on bounded rationality under evolutionary constraint.

A political model based on institutional decidability rather than discretion.

I didn’t invent moral philosophy. I engineered moral computability.

This is what I call Natural Law—not the mystical kind, not the theological kind, but the operational structure of all sustainable cooperation.

And it works because it obeys the same rules the universe does:

Scarcity

Entropy

Evolution

Computation

Reciprocity

Testability

Decidability

No metaphysics. No utopias. Just the minimum viable grammar of cooperation that does not fail at scale.

VII. Why It Had to Be Built

I began to see this clearly in the 1990s. Progressive thought was collapsing into scripted talking points. Conservative thought was collapsing into ineffectual moralizing. And no one—not left, right, or center—was answering hard questions in operational, value-neutral, measurable terms.

It was obvious what was coming: pseudoscience, institutional capture, epistemic collapse, and eventually civil war. And that’s what we’re living through now.

So I made a decision. I would build the language of truth and cooperation that our institutions failed to produce.

Not because I had all the answers. But because no one else was even asking the right questions in the right language.

That decision cost me wealth, relationships, status—and I don’t regret it. Because the world doesn’t need another ideology. It needs a system of decidability that can constrain all ideologies.

That’s what I built. That’s what this is. And now, finally, I’m teaching it.


Source date: 2025-05-08 06:49:08 UTC

Original post: https://x.com/i/articles/1920370364716363777
