Theme: Science

  • Evolutionary Computation from First Principles

    Evolutionary Computation from First Principles

    All computation begins with distinction. In the physical universe, the first distinction is polarity: the separation of positive and negative charge. This fundamental asymmetry creates the conditions for interaction. Without polarity, no information is possible, because no relation is possible. But with charge, we introduce the minimum viable structure for cause-and-effect.
    From this single difference, all future differences emerge. Charge introduces direction, constraint, and feedback—the foundations of computation.
    Charged particles interact. Some combinations are repelled, some attract and bind. The configurations that persist become atoms. These structures encode prior interactions—those that fail disappear, those that succeed are preserved. Thus begins the first form of selection under constraint.
    Atoms form molecules. Molecules self-assemble into more complex configurations. Some of these configurations reinforce themselves—catalyzing reactions that produce more of the same. These autocatalytic loops form the basis of pre-biological computation: reaction cycles that conserve information through constraint.
    Eventually, some autocatalytic systems become enclosed by membranes—protecting internal processes and enabling self-regulation. This is the emergence of the cell: a self-replicating information-processing machine.
    Here, evolutionary computation formally begins:
    • Variation arises from replication error or environmental influence.
    • Competition arises from finite resources.
    • Selection favors configurations that persist.
    • Retention stores adaptive outcomes in replicable structures.
    Cells evolve. Genetic memory improves. Environments filter the unfit. Computation scales.
    With multicellularity comes specialization. Some cells detect light, vibration, chemical gradients. Over time, these sensors integrate into neural networks—optimized for pattern recognition, attention, and learning. The brain emerges as a predictive engine: storing sensory episodes, associating cause and effect, and adjusting behavior.
    The brain is an evolutionary computer:
    • Inputs (stimuli)
    • Processing (memory + valence)
    • Outputs (action)
    • Feedback (reinforcement)
    Every behavior is a computed guess—retained or discarded by survival.
    Humans refine prediction by inventing symbols. Language compresses and transmits models between minds. Instead of computing everything independently, humans begin to compute socially. Language enables:
    • External memory (oral and written)
    • Shared modeling of the world
    • Coordination of behavior
    Now groups of humans function as distributed recursive computers, increasing their problem-solving ability by cooperation and role specialization.
    Language alone is insufficient. Cooperation requires constraints to prevent parasitism. Norms emerge. Norms become customs. Customs are formalized into law. Law constrains behavior by preserving successful computations—rules that enable cooperation and prevent conflict.
    Institutions emerge to preserve and enforce these rules. They become the information infrastructure of civilization—formalizing memory (precedent), logic (law), and enforcement (judgment).
    At the civilizational level, evolutionary computation becomes conscious. Humans deliberately test configurations of government, economy, religion, and law. Those that fail are discarded—sometimes with catastrophic cost. Those that survive are retained and refined.
    My work formalizes this process:
    • Evolutionary Computation is the universal law.
    • Truth, Reciprocity, and Decidability are the test criteria.
    • Natural Law is the codification of stable cooperative equilibria.
    Evolutionary computation is not metaphor—it is the engine of existence. From the polarity of charge to the structure of constitutions, the universe selects what works by testing it under constraint.
    • What survives, persists.
    • What persists, accumulates.
    • What accumulates, computes.
    • What computes, governs.
    To govern wisely is to align with evolutionary computation. And to formalize that process—as law, science, or morality—is to bring civilization into alignment with the logic of the universe itself.


    Source date (UTC): 2025-05-09 17:12:00 UTC

    Original post: https://x.com/i/articles/1920889501540643297

  • 1. Charge: The First Asymmetry

    1. Charge: The First Asymmetry

    All computation begins with distinction. In the physical universe, the first distinction is polarity: the separation of positive and negative charge. This fundamental asymmetry creates the conditions for interaction. Without polarity, no information is possible, because no relation is possible. But with charge, we introduce the minimum viable structure for cause-and-effect.

    Operationally, polarity introduces the first computational condition: discrete state plus interaction.

    From this single difference, all future differences emerge. Charge introduces direction, constraint, and feedback—the foundations of computation.

    2. Interaction → Constraint → Persistence

    Charged particles interact. Some combinations are repelled, some attract and bind. The configurations that persist become atoms. These structures encode prior interactions—those that fail disappear, those that succeed are preserved. Thus begins the first form of selection under constraint.

    Atoms form molecules. Molecules self-assemble into more complex configurations. Some of these configurations reinforce themselves—catalyzing reactions that produce more of the same. These autocatalytic loops form the basis of pre-biological computation: reaction cycles that conserve information through constraint.

    Persistence under constraint = memory.

    3. Recursive Stabilization → Life

    Eventually, some autocatalytic systems become enclosed by membranes—protecting internal processes and enabling self-regulation. This is the emergence of the cell: a self-replicating information-processing machine.

    Here, evolutionary computation formally begins:

    • Variation arises from replication error or environmental influence.
    • Competition arises from finite resources.
    • Selection favors configurations that persist.
    • Retention stores adaptive outcomes in replicable structures.

    Cells evolve. Genetic memory improves. Environments filter the unfit. Computation scales.
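    The four components listed above (variation, competition, selection, retention) can be sketched as a minimal evolutionary loop. This is a toy model, not anything from the original argument: the bitstring "environment," population size, and mutation rate are all illustrative assumptions.

```python
import random

def evolve(target, pop_size=50, mutation_rate=0.05, generations=200, seed=0):
    """Toy evolutionary loop: variation, competition, selection, retention."""
    rng = random.Random(seed)
    n = len(target)
    fitness = lambda ind: sum(a == b for a, b in zip(ind, target))
    # Retention: the population stores the outcomes of prior selection.
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        # Competition + selection: finite slots; persistent configurations survive.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Variation: replication error (mutation) while copying survivors.
        children = [[bit ^ (rng.random() < mutation_rate) for bit in p]
                    for p in survivors]
        population = survivors + children
        if fitness(population[0]) == n:
            break
    return max(population, key=fitness)

target = [1, 0, 1, 1, 0, 0, 1, 0] * 4
best = evolve(target)
print(sum(a == b for a, b in zip(best, target)), "of", len(target), "bits match")
```

    Because survivors are retained unchanged, the best fitness never decreases from one generation to the next; delete the sort (selection) and the loop degenerates into a random walk, which is the contrast the essay is drawing.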

    4. Neural Systems: Internal Modeling Begins

    With multicellularity comes specialization. Some cells detect light, vibration, chemical gradients. Over time, these sensors integrate into neural networks—optimized for pattern recognition, attention, and learning. The brain emerges as a predictive engine: storing sensory episodes, associating cause and effect, and adjusting behavior.

    The brain is an evolutionary computer:

    • Inputs (stimuli)
    • Processing (memory + valence)
    • Outputs (action)
    • Feedback (reinforcement)

    Every behavior is a computed guess—retained or discarded by survival.
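    The input/processing/output/feedback loop above can be sketched as a minimal reinforcement learner. The two actions, their payoff probabilities, and the exploration rate are invented for illustration; this is a sketch of the stated loop, not a model of an actual brain.

```python
import random

def run_agent(steps=500, epsilon=0.1, seed=1):
    """Sketch of the loop: stimulus -> memory + valence -> action -> feedback."""
    rng = random.Random(seed)
    # Memory + valence: a running value estimate (and sample count) per action.
    valence = {"approach": 0.0, "avoid": 0.0}
    counts = {"approach": 0, "avoid": 0}
    # Hypothetical environment: 'approach' pays off 70% of the time, 'avoid' 30%.
    reward_prob = {"approach": 0.7, "avoid": 0.3}
    for _ in range(steps):
        # Output: usually exploit stored valence; occasionally guess (explore).
        if rng.random() < epsilon:
            action = rng.choice(list(valence))
        else:
            action = max(valence, key=valence.get)
        # Feedback: a reinforcement signal from the environment.
        reward = 1.0 if rng.random() < reward_prob[action] else 0.0
        # Processing: fold the observed outcome into the retained estimate.
        counts[action] += 1
        valence[action] += (reward - valence[action]) / counts[action]
    return valence

print(run_agent())
```

    Each action is literally a computed guess: estimates that track reward are retained and exploited, while the occasional exploratory guess keeps the agent from freezing on an early, unlucky policy.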

    5. Language: Distributed Computation

    Humans refine prediction by inventing symbols. Language compresses and transmits models between minds. Instead of computing everything independently, humans begin to compute socially. Language enables:

    • External memory (oral and written)
    • Shared modeling of the world
    • Coordination of behavior

    Now groups of humans function as distributed recursive computers, increasing their problem-solving ability by cooperation and role specialization.

    6. Norms → Law → Institutions

    Language alone is insufficient. Cooperation requires constraints to prevent parasitism. Norms emerge. Norms become customs. Customs are formalized into law. Law constrains behavior by preserving successful computations—rules that enable cooperation and prevent conflict.

    Institutions emerge to preserve and enforce these rules. They become the information infrastructure of civilization—formalizing memory (precedent), logic (law), and enforcement (judgment).

    Institutions are memory and prediction made durable through rule.

    7. Civilizational Computation

    At the civilizational level, evolutionary computation becomes conscious. Humans deliberately test configurations of government, economy, religion, and law. Those that fail are discarded—sometimes with catastrophic cost. Those that survive are retained and refined.

    My work formalizes this process:

    • Evolutionary Computation is the universal law.
    • Truth, Reciprocity, and Decidability are the test criteria.
    • Natural Law is the codification of stable cooperative equilibria.

    8. Summary

    Evolutionary computation is not metaphor—it is the engine of existence. From the polarity of charge to the structure of constitutions, the universe selects what works by testing it under constraint.

    • What survives, persists.
    • What persists, accumulates.
    • What accumulates, computes.
    • What computes, governs.

    To govern wisely is to align with evolutionary computation. And to formalize that process—as law, science, or morality—is to bring civilization into alignment with the logic of the universe itself.

    Evolution is nature’s computation. Law is our expression of it. Natural Law is the operational grammar that encodes it—across all domains, for all time.


    Source date (UTC): 2025-05-09 17:11:11 UTC

    Original post: https://x.com/i/articles/1920889297512878080

  • Eric: the purge is a temporary means of institutional correction.

    Eric: the purge is a temporary means of institutional correction. And in an institution you justly rail against. The purge creates a cautionary culture in research funding, and one favoring the hard sciences out of bureaucratic safety. Don’t stress over something corporate…


    Source date (UTC): 2025-05-09 12:46:10 UTC

    Original post: https://twitter.com/i/web/status/1920822602089783309

    Replying to: https://twitter.com/i/web/status/1920708621555405308


    IN REPLY TO:

    @ericweinstein

    There has likely never been a better moment to leapfrog the U.S. in basic research…than *this* very moment.

    Original post: https://twitter.com/i/web/status/1920708621555405308

  • Why Doolittle’s Work Differs From Academic Norm

    Modeling, Constraint, and the Systemization of Civilization

    by Curt Doolittle

    I. Introduction: An Outsider’s Problem

    I think of myself as a scientist who researches epistemology. I have almost nothing in common with philosophers outside of a very few from the 20th century. Even then, I approach their work through the scientific method, and in particular through the methods of computer science, while retaining loyalty to economics as the equivalent of, and extension of, physics in biology and behavior.

    I’ve often been told my work feels alien, even to those who grasp its depth. And for years, I struggled to explain why. I’m not a traditional philosopher. I’m not a political theorist. I’m not even an economist in the academic sense. And yet, I’ve built what few within those traditions have achieved: a complete, operational system for modeling and governing human cooperation under constraint.

    The reason is simple: I think differently. My training was different. My tools were different. My standards of success were different. I didn’t study ideas to debate them. I modeled systems to see if they could survive. Where others were trying to justify beliefs, I was trying to simulate cooperation at scale under adversarial and evolutionary pressure.

    In this article I’ll try to explain why. Not only to help you understand my work, but to help me explain why it feels, and can be, challenging.

    II. Constraint vs. Justification: The Great Divide

    Most intellectuals are trained in justificatory reasoning. They begin with a belief—human dignity, equality, liberty, justice—and then build arguments to justify those beliefs. They use analogies, metaphors, traditions, and intuitions. This is the dominant method in philosophy, law, ethics, and politics.

    But that was never my method. From early on, I was immersed in constraint systems: relational databases, state machines, object-oriented design, and behavior modeling. I wasn’t asking, “What should we believe?” I was asking, “What survives mutation, recursion, noise, asymmetry, and adversarial input?”

    This isn’t a difference in emphasis. It’s a complete difference in epistemology.

    I learned early that systems must survive constraint, not argument. In software, in logistics, in simulation—you don’t win with persuasion. You win with computable reliability.

    So when I turned my attention to human systems—law, economics, governance—I carried that constraint-first logic with me. And I started to see clearly: the failure modes of our civilization are not ideological. They are architectural. They result from unverifiable claims, unmeasurable policies, unjustifiable asymmetries, and moral systems too vague to enforce.

    III. Programming as Epistemology

    Marvin Minsky once said that programming is not just a technical skill—it is a new way of thinking. And he was right. Programming rewires your brain. It trains you to:

    • Think in systems of interacting agents.
    • Model causality, not just correlation.
    • Define terms operationally, not rhetorically.
    • Iterate and refactor for resilience under change.
    • Accept only what can be compiled, executed, and tested.

    That’s a fundamentally different mental architecture from that of most philosophers, theologians, or political theorists.

    It’s not about argument. It’s about constructibility.

    And this insight changed everything for me. I stopped looking for compelling stories and started looking for models that didn’t collapse under recursion. My brain stopped thinking in metaphors and started thinking in grammars, schemas, and state transitions.

    This mode of thought is rare in the academy. But it is essential if your goal is not to win an argument—but to engineer a civilization.

    IV. Modeling Human Action from Beginning to End

    Over the course of my career, I’ve modeled:

    • The cognitive inputs to human behavior (perception, valuation, instinct).
    • The economic expressions of that behavior (preferences, trade, institutions).
    • The legal consequences of those behaviors (disputes, resolutions, enforcement).

    This means I didn’t just study one domain. I modeled the entire causal chain:

    1. Cognition →
    2. Incentive →
    3. Action →
    4. Conflict →
    5. Adjudication →
    6. Restitution

    And I noticed something crucial: the same logical structure reappeared at every level.

    That structure was evolutionary computation.

    • Trial and error.
    • Cost and benefit.
    • Variation and selection.
    • Reciprocity and punishment.

    In other words: the universe behaves as a cooperative computation under constraint, and so must any successful human system.

    So I asked the natural next question: Can we model that process at every level of civilization—cognitive, moral, legal, economic, and political? And the answer was yes.

    But no one had done it—because no one had unified those grammars under the same method of operational, testable, decidable reasoning.

    V. Stories vs. Simulations

    Most intellectual traditions are still built around narratives:

    • Plato: allegories.
    • Hegel: dialectics.
    • Rawls: thought experiments.
    • Marx: historical inevitabilities.
    • Even most economists rely on idealized simplifications.

    But I don’t think in narratives. I think in simulations.

    • I model actors.
    • I define constraints.
    • I calculate outcomes.
    • I test for failure modes.
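    As an illustration of that four-step workflow (an illustration only; these are not the author's actual models), the steps can be made runnable: define actors, impose a constraint, compute outcomes, and probe a failure mode. Every payoff value and the detection rate below are invented for the sketch.

```python
import random

def simulate(restitution, rounds=4000, seed=2):
    """Toy model: actors, a constraint, outcomes, and a failure mode.
    With restitution enforced, defection stops paying; without it,
    defection dominates cooperation (the failure mode)."""
    rng = random.Random(seed)
    payoffs = {"cooperate": [], "defect": []}
    for _ in range(rounds):
        actor = rng.choice(["cooperate", "defect"])
        if actor == "cooperate":
            # Cooperation produces 3 units of joint surplus, split evenly.
            payoffs["cooperate"].append(1.5)
        else:
            gain = 2.0  # a defector seizes surplus from a counterparty
            # Constraint: if caught (80% detection, assumed), repay plus penalty.
            if restitution and rng.random() < 0.8:
                gain -= 3.0
            payoffs["defect"].append(gain)
    return {k: sum(v) / len(v) for k, v in payoffs.items()}

print("without restitution:", simulate(restitution=False))
print("with restitution:   ", simulate(restitution=True))
```

    The point is not the particular numbers but the method: change one constraint, rerun, and observe which strategy survives, rather than arguing about which one ought to.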

    This is why my work often feels alien to others. I’m not using their grammar. I’m not offering a story. I’m offering a compiler—a machine for deciding moral, legal, and institutional questions under real-world constraints.

    This is why I define truth not as “correspondence” or “coherence,” but as survival under adversarial recursion with no externalities. That is a systems definition of truth. And it forces an entirely new set of constraints on what can be claimed, believed, or enforced.
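    A loose computational analogy for that definition (an analogy only; "survival under adversarial recursion" is the author's term of art, not an established algorithm): treat a claim as a predicate, and retain it only while adversarial search fails to produce a counterexample.

```python
import random

def survives_testing(claim, gen_input, trials=10_000, seed=3):
    """Retain a claim only if adversarial probing finds no counterexample.
    Survival is provisional: passing this round does not prove the claim."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = gen_input(rng)
        if not claim(x):
            return False, x  # falsified: discard the claim, keep the evidence
    return True, None  # survived; retained until a stronger test arrives

ints = lambda r: r.randint(-100, 100)
ok, _ = survives_testing(lambda n: n * n >= 0, ints)   # survives probing
bad, cex = survives_testing(lambda n: n < 50, ints)    # falsified with a witness
print(ok, bad, cex)
```

    A surviving claim is never "proved," only retained under the tests applied so far, which is the sense in which truth here is defined by survival rather than by correspondence or coherence.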

    VI. What Emerged: A Civilizational Operating System

    What emerged from this lifelong modeling wasn’t a “theory.” It was a constructive logic of human cooperation. A universal language for modeling truth, reciprocity, and decidability.

    I built:

    • A grammar of operational speech.
    • A system of reciprocal insurance.
    • A legal architecture based on testifiability and restitution.
    • An economic model based on bounded rationality under evolutionary constraint.
    • A political model based on institutional decidability rather than discretion.

    I didn’t invent moral philosophy. I engineered moral computability.

    This is what I call Natural Law—not the mystical kind, not the theological kind, but the operational structure of all sustainable cooperation.

    And it works because it obeys the same rules the universe does:

    • Scarcity
    • Entropy
    • Evolution
    • Computation
    • Reciprocity
    • Testability
    • Decidability

    No metaphysics. No utopias. Just the minimum viable grammar of cooperation that does not fail at scale.

    VII. Why It Had to Be Built

    I began to see this clearly in the 1990s. Progressive thought was collapsing into scripted talking points. Conservative thought was collapsing into ineffectual moralizing. And no one—not left, right, or center—was answering hard questions in operational, value-neutral, measurable terms.

    It was obvious what was coming: pseudoscience, institutional capture, epistemic collapse, and eventually civil war. And that’s what we’re living through now.

    So I made a decision. I would build the language of truth and cooperation that our institutions failed to produce.

    Not because I had all the answers. But because no one else was even asking the right questions in the right language.

    That decision cost me wealth, relationships, status—and I don’t regret it. Because the world doesn’t need another ideology. It needs a system of decidability that can constrain all ideologies.

    That’s what I built. That’s what this is. And now, finally, I’m teaching it.

    http://x.com/i/article/1920370364716363777


    Source date (UTC): 2025-05-08 06:55:24 UTC

    Original post: https://twitter.com/i/web/status/1920371940503794090



  • Modeling, Constraint, and the Systemization of Civilization by Curt Doolittle I.

    Modeling, Constraint, and the Systemization of Civilization

    by Curt Doolittle

    I. Introduction: An Outsider’s Problem

    I think of myself as a scientist that researches epistemology. I have almost nothing in common with philosophers outside of a very few from the 20th century. Even then I approach their work from the scientific method and in particular the methods of computer science, while retaining loyalty to economics as the equivalent of, and extension of, physics in biology and behavior.

    I’ve often been told my work feels alien, even to those who grasp its depth. And for years, I struggled to explain why. I’m not a traditional philosopher. I’m not a political theorist. I’m not even an economist in the academic sense. And yet, I’ve built what few within those traditions have achieved: a complete, operational system for modeling and governing human cooperation under constraint.

    The reason is simple: I think differently. My training was different. My tools were different. My standards of success were different. I didn’t study ideas to debate them. I modeled systems to see if they could survive. Where others were trying to justify beliefs, I was trying to simulate cooperation at scale under adversarial and evolutionary pressure.

    In this article I’ll try to explain why. Not only to help you understand my work, but to help me explain why it feels, and can be, challenging.

    II. Constraint vs. Justification: The Great Divide

    Most intellectuals are trained in justificatory reasoning. They begin with a belief—human dignity, equality, liberty, justice—and then build arguments to justify those beliefs. They use analogies, metaphors, traditions, and intuitions. This is the dominant method in philosophy, law, ethics, and politics.

    But that was never my method. From early on, I was immersed in constraint systems: relational databases, state machines, object-oriented design, and behavior modeling. I wasn’t asking, “What should we believe?” I was asking, “What survives mutation, recursion, noise, asymmetry, and adversarial input?”

    This isn’t a difference in emphasis. It’s a complete difference in epistemology.

    I learned early that systems must survive constraint, not argument. In software, in logistics, in simulation—you don’t win with persuasion. You win with computable reliability.

    So when I turned my attention to human systems—law, economics, governance—I carried that constraint-first logic with me. And I started to see clearly: the failure modes of our civilization are not ideological. They are architectural. They result from unverifiable claims, unmeasurable policies, unjustifiable asymmetries, and moral systems too vague to enforce.

    III. Programming as Epistemology

    Marvin Minsky once said that programming is not just a technical skill—it is a new way of thinking. And he was right. Programming rewires your brain. It trains you to:

    Think in systems of interacting agents.

    Model causality, not just correlation.

    Define terms operationally, not rhetorically.

    Iterate and refactor for resilience under change.

    Accept only what can be compiled, executed, and tested.

    That’s a fundamentally different mental architecture than that of most philosophers, theologians, or political theorists.

    It’s not about argument. It’s about constructibility.

    And this insight changed everything for me. I stopped looking for compelling stories and started looking for models that didn’t collapse under recursion. My brain stopped thinking in metaphors and started thinking in grammars, schemas, and state transitions.

    This mode of thought is rare in the academy. But it is essential if your goal is not to win an argument—but to engineer a civilization.

    IV. Modeling Human Action from Beginning to End

    Over the course of my career, I’ve modeled:

    The cognitive inputs to human behavior (perception, valuation, instinct).

    The economic expressions of that behavior (preferences, trade, institutions).

    The legal consequences of those behaviors (disputes, resolutions, enforcement).

    This means I didn’t just study one domain. I modeled the entire causal chain:

    Cognition →

    Incentive →

    Action →

    Conflict →

    Adjudication →

    Restitution

    And I noticed something crucial: the same logical structure reappeared at every level.

    That structure was evolutionary computation.

    Trial and error.

    Cost and benefit.

    Variation and selection.

    Reciprocity and punishment.

    In other words: the universe behaves as a cooperative computation under constraint, and so must any successful human system.

    So I asked the natural next question: Can we model that process at every level of civilization—cognitive, moral, legal, economic, and political? And the answer was yes.

    But no one had done it—because no one had unified those grammars under the same method of operational, testable, decidable reasoning.

    V. Stories vs. Simulations

    Most intellectual traditions are still built around narratives:

    Plato: allegories.

    Hegel: dialectics.

    Rawls: thought experiments.

    Marx: historical inevitabilities.

    Even most economists rely on idealized simplifications.

    But I don’t think in narratives. I think in simulations.

    I model actors.

    I define constraints.

    I calculate outcomes.

    I test for failure modes.
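Those four steps — model actors, define constraints, calculate outcomes, test for failure modes — can be sketched as a minimal simulation harness. Every name, strategy, and payoff here is a hypothetical stand-in, not the author's actual model.

```python
import itertools

def simulate(actors, constraint, payoff):
    """Enumerate pairwise interactions, skip those that violate the
    constraint, and return each actor's accumulated payoff."""
    scores = {name: 0.0 for name in actors}
    for a, b in itertools.combinations(actors, 2):
        move_a, move_b = actors[a], actors[b]
        if not constraint(move_a, move_b):
            continue  # failure mode: interaction violates the constraint
        scores[a] += payoff(move_a, move_b)
        scores[b] += payoff(move_b, move_a)
    return scores

# Actors: each holds a fixed strategy, cooperate (True) or defect (False).
actors = {"alice": True, "bob": True, "carol": False}
# Constraint: an interaction is admissible only if someone cooperates.
constraint = lambda a, b: a or b
# Payoff: cooperating earns 3 against a cooperator, 0 against a defector;
# defecting earns 5 against a cooperator.
payoff = lambda me, other: (3 if other else 0) if me else (5 if other else 0)

scores = simulate(actors, constraint, payoff)
```

Running it exposes the failure mode immediately: the lone defector outscores both cooperators, which is exactly the kind of outcome the "test for failure modes" step is meant to surface.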

    This is why my work often feels alien to others. I’m not using their grammar. I’m not offering a story. I’m offering a compiler—a machine for deciding moral, legal, and institutional questions under real-world constraints.

    This is why I define truth not as “correspondence” or “coherence,” but as survival under adversarial recursion with no externalities. That is a systems definition of truth. And it forces an entirely new set of constraints on what can be claimed, believed, or enforced.
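One way to make "survival under adversarial recursion" concrete is a toy filter: a claim survives only if it keeps holding while an adversary repeatedly perturbs its inputs. This is purely illustrative and not the author's formal definition; the perturbation, depth, and trial count are all hypothetical.

```python
import random

def survives_adversarial_recursion(claim, perturb, depth=3, trials=100):
    """Toy filter: test a claim against repeated adversarial
    perturbation of randomly sampled inputs."""
    for _ in range(trials):
        x = random.uniform(-10.0, 10.0)
        for _ in range(depth):
            x = perturb(x)          # adversarial move
            if not claim(x):        # claim collapses under recursion
                return False
    return True

# A claim robust to sign flips survives; a sign-dependent one does not.
robust = survives_adversarial_recursion(lambda x: x * x >= 0, lambda x: -x)
fragile = survives_adversarial_recursion(lambda x: x > 0, lambda x: -x)
```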

    VI. What Emerged: A Civilizational Operating System

    What emerged from this lifelong modeling wasn’t a “theory.” It was a constructive logic of human cooperation. A universal language for modeling truth, reciprocity, and decidability.

    I built:

    A grammar of operational speech.

    A system of reciprocal insurance.

    A legal architecture based on testifiability and restitution.

    An economic model based on bounded rationality under evolutionary constraint.

    A political model based on institutional decidability rather than discretion.

    I didn’t invent moral philosophy. I engineered moral computability.

    This is what I call Natural Law—not the mystical kind, not the theological kind, but the operational structure of all sustainable cooperation.

    And it works because it obeys the same rules the universe does:

    Scarcity

    Entropy

    Evolution

    Computation

    Reciprocity

    Testability

    Decidability

    No metaphysics. No utopias. Just the minimum viable grammar of cooperation that does not fail at scale.

    VII. Why It Had to Be Built

    I began to see this clearly in the 1990s. Progressive thought was collapsing into scripted talking points. Conservative thought was collapsing into ineffectual moralizing. And no one—not left, right, or center—was answering hard questions in operational, value-neutral, measurable terms.

    It was obvious what was coming: pseudoscience, institutional capture, epistemic collapse, and eventually civil war. And that’s what we’re living through now.

    So I made a decision. I would build the language of truth and cooperation that our institutions failed to produce.

    Not because I had all the answers. But because no one else was even asking the right questions in the right language.

    That decision cost me wealth, relationships, status—and I don’t regret it. Because the world doesn’t need another ideology. It needs a system of decidability that can constrain all ideologies.

    That’s what I built. That’s what this is. And now, finally, I’m teaching it.


    Source date (UTC): 2025-05-08 06:49:08 UTC

    Original post: https://x.com/i/articles/1920370364716363777

  • WHY MATHEMATICS IS ONE OF THE SCIENCES Q:Curt –“But, but Math isn’t Science!”–

    WHY MATHEMATICS IS ONE OF THE SCIENCES
Q: Curt – “But, but Math isn’t Science!”

Technically speaking, mathematics falls into the category of formal sciences. The three are Formal (Logic), Physical, and Social (Behavioral). (Look it up.) In my work I add Evolutionary in order to disambiguate before (physical), during (social), and after (evolutionary) states. But that is a matter of epistemic utility, not necessity.

    Your understanding of the definition of ‘science’ is probably a source of error.

The Criteria for a Science are:

    1. Systematic Inquiry: All sciences employ structured methods to investigate their subject matter, whether through formal proofs, empirical experiments, or statistical analysis.
    2. Evidence-Based: Conclusions are grounded in observable or verifiable evidence, whether that evidence is logical (formal), physical (empirical), or behavioral (social).
    3. Objectivity: Sciences aim for impartiality, minimizing bias through standardized methods, peer review, and replicability (where applicable).
    4. Testability/Falsifiability: Scientific claims are subject to testing or scrutiny, allowing for revision or refutation based on new evidence or reasoning.
    5. Generalizability: Sciences seek to identify patterns, laws, or principles that apply beyond specific cases, whether universal (physical), abstract (formal), or probabilistic (social).

    Note that in my work I emphasize testifiability because all the other criteria are reducible to the dimensions necessary to produce testifiable testimony.

As such, science emerged in the West because it emerged out of our law, and our law has been dependent upon testifiability since we were horse raiders on the Steppe.

    Cheers
    CD


    Source date (UTC): 2025-05-05 23:07:10 UTC

    Original post: https://twitter.com/i/web/status/1919529329954062338


  • again, empirical rather than theoretical. great work. still empirical (descripti

    again, empirical rather than theoretical. great work. still empirical (descriptive).


    Source date (UTC): 2025-05-02 07:29:15 UTC

    Original post: https://twitter.com/i/web/status/1918206134986653889

    Reply addressees: @bierlingm @Hitchslap1

    Replying to: https://twitter.com/i/web/status/1918202430593929538
