Author: Curt Doolittle

  • hyphen: connector of compound words: hyphen key – en-dash: a range: mac:option-h

    – hyphen: connector of compound words: hyphen key
    – en-dash: a range: mac:option-hyphen or win:Alt + 0150
    — em-dash: a parenthetical: mac:shift-option-hyphen or win:Alt + 0151
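
The three marks are distinct Unicode code points, which a quick programmatic check confirms (the dictionary below is just an illustration of that fact):

```python
# The three dash characters discussed above, keyed by name.
marks = {"hyphen-minus": "-", "en dash": "–", "em dash": "—"}

# Map each character to its Unicode code point in U+XXXX notation.
codepoints = {name: f"U+{ord(ch):04X}" for name, ch in marks.items()}
# hyphen-minus is U+002D, en dash is U+2013, em dash is U+2014
```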


    Source date (UTC): 2025-05-08 07:51:49 UTC

    Original post: https://twitter.com/i/web/status/1920386138134753372


  • Yes. Regularly. But the criticisms are always the same. And usually appeal to no

    Yes. Regularly. But the criticisms are always the same. And usually appeal to normativity.


    Source date (UTC): 2025-05-08 07:37:26 UTC

    Original post: https://twitter.com/i/web/status/1920382520689864938

    Reply addressees: @FlareVox

    Replying to: https://twitter.com/i/web/status/1920330862513557876


    IN REPLY TO:

    Original post on X

    Original tweet unavailable — we could not load the text of the post this reply is addressing on X. That usually means the tweet was deleted, the account is protected, or X does not expose it to the account used for archiving. The Original post link below may still open if you view it in X while signed in.

    Original post: https://twitter.com/i/web/status/1920330862513557876

  • Doh! 😉

    Doh! 😉


    Source date (UTC): 2025-05-08 07:35:35 UTC

    Original post: https://twitter.com/i/web/status/1920382052840390935

    Reply addressees: @bierlingm

    Replying to: https://twitter.com/i/web/status/1920375083572379653


    IN REPLY TO:

    @bierlingm

    @curtdoolittle In the long tradition of Anglo population programming 😜

    Original post: https://twitter.com/i/web/status/1920375083572379653

  • Why Doolittle’s Work Differs From Academic Norm

    Modeling, Constraint, and the Systemization of Civilization

    by Curt Doolittle

    I. Introduction: An Outsider’s Problem

    I think of myself as a scientist who researches epistemology. I have almost nothing in common with philosophers outside of a very few from the 20th century. Even then, I approach their work through the scientific method, and in particular the methods of computer science, while retaining loyalty to economics as the equivalent of, and extension of, physics in biology and behavior.

    I’ve often been told my work feels alien, even to those who grasp its depth. And for years, I struggled to explain why. I’m not a traditional philosopher. I’m not a political theorist. I’m not even an economist in the academic sense. And yet, I’ve built what few within those traditions have achieved: a complete, operational system for modeling and governing human cooperation under constraint.

    The reason is simple: I think differently. My training was different. My tools were different. My standards of success were different. I didn’t study ideas to debate them. I modeled systems to see if they could survive. Where others were trying to justify beliefs, I was trying to simulate cooperation at scale under adversarial and evolutionary pressure.

    In this article I’ll try to explain why. Not only to help you understand my work, but to help me explain why it feels, and can be, challenging.

    II. Constraint vs. Justification: The Great Divide

    Most intellectuals are trained in justificatory reasoning. They begin with a belief—human dignity, equality, liberty, justice—and then build arguments to justify those beliefs. They use analogies, metaphors, traditions, and intuitions. This is the dominant method in philosophy, law, ethics, and politics.

    But that was never my method. From early on, I was immersed in constraint systems: relational databases, state machines, object-oriented design, and behavior modeling. I wasn’t asking, “What should we believe?” I was asking, “What survives mutation, recursion, noise, asymmetry, and adversarial input?”

    This isn’t a difference in emphasis. It’s a complete difference in epistemology.

    I learned early that systems must survive constraint, not argument. In software, in logistics, in simulation—you don’t win with persuasion. You win with computable reliability.

    So when I turned my attention to human systems—law, economics, governance—I carried that constraint-first logic with me. And I started to see clearly: the failure modes of our civilization are not ideological. They are architectural. They result from unverifiable claims, unmeasurable policies, unjustifiable asymmetries, and moral systems too vague to enforce.

    III. Programming as Epistemology

    Marvin Minsky once said that programming is not just a technical skill—it is a new way of thinking. And he was right. Programming rewires your brain. It trains you to:

    • Think in systems of interacting agents.
    • Model causality, not just correlation.
    • Define terms operationally, not rhetorically.
    • Iterate and refactor for resilience under change.
    • Accept only what can be compiled, executed, and tested.

    That’s a fundamentally different mental architecture from that of most philosophers, theologians, or political theorists.

    It’s not about argument. It’s about constructibility.

    And this insight changed everything for me. I stopped looking for compelling stories and started looking for models that didn’t collapse under recursion. My brain stopped thinking in metaphors and started thinking in grammars, schemas, and state transitions.

    This mode of thought is rare in the academy. But it is essential if your goal is not to win an argument but to engineer a civilization.

    IV. Modeling Human Action from Beginning to End

    Over the course of my career, I’ve modeled:

    • The cognitive inputs to human behavior (perception, valuation, instinct).
    • The economic expressions of that behavior (preferences, trade, institutions).
    • The legal consequences of those behaviors (disputes, resolutions, enforcement).

    This means I didn’t just study one domain. I modeled the entire causal chain:

    1. Cognition →
    2. Incentive →
    3. Action →
    4. Conflict →
    5. Adjudication →
    6. Restitution
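
The six stages read naturally as an ordered state machine. As a purely illustrative sketch (the enum names and transition table are mine, not a formal part of the system), the chain might be encoded as:

```python
from enum import Enum, auto

class Stage(Enum):
    """Hypothetical encoding of the six-stage causal chain."""
    COGNITION = auto()
    INCENTIVE = auto()
    ACTION = auto()
    CONFLICT = auto()
    ADJUDICATION = auto()
    RESTITUTION = auto()

# Each stage feeds the next; restitution is terminal.
NEXT = {s: Stage(s.value + 1) for s in Stage if s is not Stage.RESTITUTION}

def trace(start: Stage = Stage.COGNITION) -> list:
    """Walk the chain from `start` until the terminal stage."""
    path = [start]
    while path[-1] in NEXT:
        path.append(NEXT[path[-1]])
    return path
```

Here `trace()` merely enumerates the chain; the point is that each transition is explicit and checkable rather than rhetorical.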

    And I noticed something crucial: the same logical structure reappeared at every level.

    That structure was evolutionary computation.

    • Trial and error.
    • Cost and benefit.
    • Variation and selection.
    • Reciprocity and punishment.

    In other words: the universe behaves as a cooperative computation under constraint, and so must any successful human system.
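
The four bullets above describe the core loop of any evolutionary computation: variation proposes candidates, cost-benefit scores them, selection retains survivors. A minimal sketch of that loop (the objective function and all parameters here are invented for illustration, not drawn from the text):

```python
import random

def evolve(fitness, population, generations=50, mutation=0.1, seed=0):
    """Minimal variation-and-selection loop: mutate candidates (trial and
    error), score them by net benefit (cost and benefit), and keep the
    best half each round (variation and selection)."""
    rng = random.Random(seed)
    pop = list(population)
    for _ in range(generations):
        # Variation: each survivor proposes one mutated offspring.
        offspring = [x + rng.gauss(0, mutation) for x in pop]
        # Selection: rank all candidates by fitness, keep the top half.
        pop = sorted(pop + offspring, key=fitness, reverse=True)[: len(pop)]
    return pop

# Toy objective: benefit peaks at 1.0, with quadratic cost on either side.
best = evolve(lambda x: -(x - 1.0) ** 2, population=[0.0] * 8)
```

The same loop structure applies whether the candidates are organisms, strategies, or institutions; only the fitness function changes.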

    So I asked the natural next question: Can we model that process at every level of civilization—cognitive, moral, legal, economic, and political? And the answer was yes.

    But no one had done it—because no one had unified those grammars under the same method of operational, testable, decidable reasoning.

    V. Stories vs. Simulations

    Most intellectual traditions are still built around narratives:

    • Plato: allegories.
    • Hegel: dialectics.
    • Rawls: thought experiments.
    • Marx: historical inevitabilities.
    • Even most economists rely on idealized simplifications.

    But I don’t think in narratives. I think in simulations.

    • I model actors.
    • I define constraints.
    • I calculate outcomes.
    • I test for failure modes.
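
As a toy instance of that workflow (actors, a constraint, a calculated outcome, a failure-mode check), one might write something like the following; the trade rule, friction parameter, and failure condition are all invented for illustration:

```python
import random
from dataclasses import dataclass

@dataclass
class Actor:
    wealth: float

def simulate(n_actors=10, rounds=100, friction=0.05, seed=1):
    """Actors trade at random under one constraint (wealth never goes
    negative); we then test for a failure mode: aggregate wealth
    leaking away through per-trade friction."""
    rng = random.Random(seed)
    actors = [Actor(wealth=100.0) for _ in range(n_actors)]
    for _ in range(rounds):
        giver, taker = rng.sample(actors, 2)
        amount = min(giver.wealth, rng.uniform(0.0, 10.0))  # constraint
        giver.wealth -= amount
        taker.wealth += amount * (1.0 - friction)           # friction loss
    total = sum(a.wealth for a in actors)
    violated = any(a.wealth < 0 for a in actors)            # failure check
    return total, violated
```

Running `simulate()` shows the constraint holding while aggregate wealth decays, so the failure mode is measured rather than argued about.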

    This is why my work often feels alien to others. I’m not using their grammar. I’m not offering a story. I’m offering a compiler—a machine for deciding moral, legal, and institutional questions under real-world constraints.

    This is why I define truth not as “correspondence” or “coherence,” but as survival under adversarial recursion with no externalities. That is a systems definition of truth. And it forces an entirely new set of constraints on what can be claimed, believed, or enforced.

    VI. What Emerged: A Civilizational Operating System

    What emerged from this lifelong modeling wasn’t a “theory.” It was a constructive logic of human cooperation. A universal language for modeling truth, reciprocity, and decidability.

    I built:

    • A grammar of operational speech.
    • A system of reciprocal insurance.
    • A legal architecture based on testifiability and restitution.
    • An economic model based on bounded rationality under evolutionary constraint.
    • A political model based on institutional decidability rather than discretion.

    I didn’t invent moral philosophy. I engineered moral computability.

    This is what I call Natural Law—not the mystical kind, not the theological kind, but the operational structure of all sustainable cooperation.

    And it works because it obeys the same rules the universe does:

    • Scarcity
    • Entropy
    • Evolution
    • Computation
    • Reciprocity
    • Testability
    • Decidability

    No metaphysics. No utopias. Just the minimum viable grammar of cooperation that does not fail at scale.

    VII. Why It Had to Be Built

    I began to see this clearly in the 1990s. Progressive thought was collapsing into scripted talking points. Conservative thought was collapsing into ineffectual moralizing. And no one—not left, right, or center—was answering hard questions in operational, value-neutral, measurable terms.

    It was obvious what was coming: pseudoscience, institutional capture, epistemic collapse, and eventually civil war. And that’s what we’re living through now.

    So I made a decision. I would build the language of truth and cooperation that our institutions failed to produce.

    Not because I had all the answers. But because no one else was even asking the right questions in the right language.

    That decision cost me wealth, relationships, status—and I don’t regret it. Because the world doesn’t need another ideology. It needs a system of decidability that can constrain all ideologies.

    That’s what I built. That’s what this is. And now, finally, I’m teaching it.

    http://x.com/i/article/1920370364716363777


    Source date (UTC): 2025-05-08 06:55:24 UTC

    Original post: https://twitter.com/i/web/status/1920371940503794090





  • I was speaking humorously. So yes. 😉

    I was speaking humorously. So yes. 😉


    Source date (UTC): 2025-05-08 06:28:21 UTC

    Original post: https://twitter.com/i/web/status/1920365137065611632

    Reply addressees: @aldomi29

    Replying to: https://twitter.com/i/web/status/1920306041352262047


    IN REPLY TO:

    @aldomi29

    @curtdoolittle Nonsense, the machine thinks like you do

    Original post: https://twitter.com/i/web/status/1920306041352262047

  • It’s not difficult. It’s tedious and expensive. We are overloaded at the moment b

    It’s not difficult. It’s tedious and expensive. We are overloaded at the moment, but we could do it. And it’s worth doing. And there is a financial upside. And the political timing is right.


    Source date (UTC): 2025-05-08 04:20:05 UTC

    Original post: https://twitter.com/i/web/status/1920332855097381197

    Reply addressees: @ItIsHoeMath @leonardaisfunE @ThruTheHayes

    Replying to: https://twitter.com/i/web/status/1920329248843133380


    IN REPLY TO:

    @ItIsHoeMath

    @leonardaisfunny @curtdoolittle @ThruTheHayes Any advice?

    Original post: https://twitter.com/i/web/status/1920329248843133380

  • You’re right to highlight that much of our knowledge—especially sensorimotor, mi

    You’re right to highlight that much of our knowledge—especially sensorimotor, mimetic, and pre-linguistic—is encoded non-verbally. But that doesn’t mean it’s unknowable, only that it’s non-propositional. It’s embodied, procedural, and episodic rather than symbolic.

    The mistake is in assuming that language is the only means of encoding or transmitting knowledge. In my work, I treat language not as a container of truth but as an index into a network of operational sequences—tests, performances, transformations. We don’t need vocabulary for everything. We need operational commensurability—the capacity to represent, replicate, or verify a behavior, transformation, or inference, whether in muscle memory or machine execution.

    AI does make errors when it lacks sufficient operational grounding—when it attempts to infer causality from symbolic correlation rather than from a model of demonstrated, repeatable behavior. This isn’t a failure of AI per se—it’s a limit of any system not yet trained on the relevant operational sequences. Just as a child fumbles before learning to tie shoelaces by repetition, so too does a model without feedback from embodiment or sufficient training data.

    So yes—mimetic, imitative, and procedural learning is foundational. But what we call “language” is simply one layer of the stack. The deeper layer is sequence learning—motor, sensory, symbolic, or otherwise. My system emphasizes testifiability and demonstrated interest precisely to bridge this gap between symbolic and operational knowledge, and to measure whether what’s being claimed can be done, performed, or validated—not merely said.

    Reply addressees: @slenchy @bryanbrey


    Source date (UTC): 2025-05-08 03:38:29 UTC

    Original post: https://twitter.com/i/web/status/1920322385234046976

    Replying to: https://twitter.com/i/web/status/1920320152676995348


    IN REPLY TO:

    @slenchy

    (this is unrelated to previous question, more about LLMs, wondering about “flawless” used in the article)
    What about all the implicit/non-verbal knowledge we have, as living beings? Stuff there’s no vocabulary for, like how specifically to move our bodies to do this or that, which we gained by imitation and/or practice. Isn’t this an unavoidable void in knowledge of anything language-based, which will cause the AI to (occasionally) make mistakes in its reasoning, regardless of how well trained it is?

    Original post: https://twitter.com/i/web/status/1920320152676995348