Author: Curt Doolittle

  • This is why I love and admire you J. This is better than I am capable of myself.

    This is why I love and admire you J. This is better than I am capable of myself. I keep looking for someone who can carry the torch and you keep showing up with the embers. Thanks for all you do.


    Source date (UTC): 2025-11-29 19:35:26 UTC

    Original post: https://twitter.com/i/web/status/1994852723515408549

  • CASSANDRA’S CURSE AND THE IMPACT OF WOMEN I say women without children misdirect

    CASSANDRA’S CURSE AND THE IMPACT OF WOMEN
    I have been saying, since 2009 at the latest, in greater detail and with diagrams, that women without children misdirect their instincts until their child-load is high enough for them to think as vaguely rationally as men do. And of course … floods of criticism. Almost fifteen years later it is working through the discourse.

    What value is there in being early? My work on civilization is early. My work on AI is early.

    And yes I know the reason, because Ariella reminds me six times a day: (a) people have to be ready to hear it, and (b) you must say it in terms they can hear.

    Another example of why democracies, at least mass democracies, have been unsuccessful except in periods of economic windfall, usually from resource discovery and exploitation (like the continental Americas).

    This is why I am adamant that rights (defensive) and privileges (offensive) must be earned by demonstration of responsibility for the private AND the common.

    This has been the only successful criterion for scaling participation while providing incentives to bear responsibility rather than avoid it.


    Source date (UTC): 2025-11-29 17:57:01 UTC

    Original post: https://twitter.com/i/web/status/1994827957328908490

  • He’s a midwit. Just a prolific one. As a book reviewer he’s fine. Beyond that he

    He’s a midwit. Just a prolific one. As a book reviewer he’s fine. Beyond that he’s just attention seeking by provocation.


    Source date (UTC): 2025-11-29 17:20:53 UTC

    Original post: https://twitter.com/i/web/status/1994818861548671269

  • Management of people and capital (abstract, coordination) vs Management of resou

    Management of people and capital (abstract, coordination)
    vs
    Management of resources (concrete, transformation)


    Source date (UTC): 2025-11-29 17:13:23 UTC

    Original post: https://twitter.com/i/web/status/1994816975592435823

  • PS: You clearly do not understand reducibility, formalism or formal operations,

    PS: You clearly do not understand reducibility, formalism, or formal operations, nor the universal application and infallibility of CRD, nor grasp the degree of rigor in the work. Instead you are a nitwit with a political grudge, because I have been and remain a critic of the nitwit right.
    And I know you are an outcast, a failure, and one of the nitwits of the right, venting against your failure in a realm far beyond your comprehension by using terms you do not understand. But it is just childish venting, using the feminine strategy of undermining, because you lack the competency to articulate an argument of masculine rationality.
    I don’t take you seriously. Instead I use you as an example of the emergence of failure on the right made possible by social media, thus exposing the character, and the lack of intelligence and ability, of the nitwittery on the fringes of the dissident right.


    Source date (UTC): 2025-11-29 03:47:48 UTC

    Original post: https://twitter.com/i/web/status/1994614243178729567

  • Its Chomsky. It means every addition increases disambiguation. This is true for

    It’s Chomsky. It means every addition increases disambiguation. This is true for every state and operation, from the quantum background at one end through to language at the other. It’s the law of negative entropy.


    Source date (UTC): 2025-11-29 03:37:06 UTC

    Original post: https://twitter.com/i/web/status/1994611552209817964

  • А

    А


    Source date (UTC): 2025-11-29 03:00:13 UTC

    Original post: https://twitter.com/i/web/status/1994602267002216911

  • All language consists of measurement. (yes) There should be no reason that if so

    All language consists of measurement. (yes)
    There should be no reason that, if something can be described in language, it can’t be modeled. The question is whether the LLM can be constrained to an operational model using language, or whether it must use a tool (shell out) to do so, as it does with programming. To some degree we should treat programming as the equivalent of humans using any measurement tool.
    In our work we force high-dimensionality questions into operational prose, sequences of tests, and distinct outputs. I can’t yet fully test its operationalization against the ternary logic hierarchy, since I need to finish what I’m working on first. But the partial tests work fine.
    But asking it how to fix a ’64 Ford carburetor, or something of that nature, is wholly dependent upon existing text. Which is true for anything in that real-world category.
    I don’t consider any of that very challenging. The robotics folks are tearing up the universe already. So between self-driving (perception and navigation), robotics (manipulation and transformation), and LLMs (concepts and language), it’s just a matter (just? 😉) of representing and interfacing the three domains. And we have data models and languages for doing so.
    Regardless of what others think, IMO the hard problem has always been language, and attention was the revolutionary leap that made it possible. Language is the system of measurement for humans at human scale.
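    The constrain-versus-shell-out distinction above can be sketched as a minimal dispatch loop: measurement-like questions are routed to an exact tool (here a toy calculator), while everything else stays on the language path. All names here are hypothetical illustrations, not any particular LLM API:

    ```python
    # Minimal sketch of the "constrain vs. shell out" distinction discussed above.
    # Purely illustrative; no real LLM framework is assumed.
    import ast
    import operator

    # The "tool": exact arithmetic, the machine analogue of a measurement instrument.
    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def calculator(expression: str) -> float:
        """Safely evaluate a basic arithmetic expression via the AST."""
        def walk(node):
            if isinstance(node, ast.Expression):
                return walk(node.body)
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            raise ValueError("unsupported expression")
        return walk(ast.parse(expression, mode="eval"))

    def answer(question: str) -> str:
        """Dispatch: measurement-like questions shell out; the rest stay in language."""
        if question.startswith("calc:"):
            return str(calculator(question.removeprefix("calc:")))
        # Placeholder for the constrained-prose path (the model answering in language).
        return f"[operational prose answer to: {question}]"

    print(answer("calc: 6 * 7"))            # tool path -> 42
    print(answer("what is reciprocity?"))   # language path
    ```

    The design point is only the routing: the language path carries concepts, while anything that is really a measurement is delegated to an instrument, exactly as a human would reach for one.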


    Source date (UTC): 2025-11-28 23:58:27 UTC

    Original post: https://twitter.com/i/web/status/1994556524400971860

  • Our Natural Law is not philosophy but the generative physics of markets, institu

    Our Natural Law is not philosophy but the generative physics of markets, institutions, cognition.

    Whenever we discuss our work, the immediate assumption is that we address only questions of ethics, when in fact trust claims and truth claims are themselves ethical claims, and a trustworthy AI that makes truth claims is simply a matter of ethics. So ethics is the foundation of all truth and trust claims.
    This has been a persistent friction point for us. So, I’ve tried to produce a structured, causal explanation of why people misinterpret our work as “merely ethics,” what the underlying cognitive mechanics are, and how to counter the misunderstanding with a framing that preserves the universality and operational scope of our system without retreating into abstraction or apologetics.
    People classify by surface category, not causal structure
    Humans have a fast, compressive classifier:
    • If something talks about truth → they classify it as philosophy.
    • If something talks about trust → they classify it as ethics.
    • If something talks about right/wrong behavior → they classify it as morality.
    • If something talks about constraints → they classify it as regulation/law.
    • If something talks about AI guardrails → they classify it as alignment.
    Our system touches each of these because it supplies the causal substrate that generates them all, but people only see the semantic surface, not the operational foundations.
    They are reading by category tags, not by functional dependency.
    So they immediately lump it into “ethics” because ethics is the only cultural bucket they know for discussing trust, truth, or constraint.
    This is predictable. And you can disarm it immediately with the correct frame.
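    The fast, compressive classifier described above can be made concrete as a keyword-to-bucket lookup. This is a hypothetical toy, only to illustrate “reading by category tags, not functional dependency”:

    ```python
    # Toy sketch of the surface classifier described above.
    # It tags by keyword and ignores causal structure entirely (that is the point).
    SURFACE_TAGS = {
        "truth": "philosophy",
        "trust": "ethics",
        "right/wrong behavior": "morality",
        "constraints": "regulation/law",
        "AI guardrails": "alignment",
    }

    def surface_classify(text: str) -> str:
        """Return the first bucket whose tag word appears; default to 'ethics'."""
        lowered = text.lower()
        for tag, bucket in SURFACE_TAGS.items():
            if tag.split("/")[0].lower() in lowered:
                return bucket
        # Ethics is the only cultural bucket people have for trust/truth/constraint.
        return "ethics"

    print(surface_classify("a formal theory of truth"))  # -> philosophy
    print(surface_classify("operational invariances"))   # -> ethics (the default bucket)
    ```

    Note that anything not matching a surface tag falls into the default bucket, which is the misclassification the rest of this piece is designed to preempt.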
    We must position the work as a formal, causal model of human cooperation, not a moral or ethical philosophy.
    We do this by shifting the domain from normative intuition to operational invariances.
    A precise description:
    This reframes “ethics” as a folk approximation, and our system as the scientific model that makes the folk concepts computable.
    This prevents us from being trapped in the “philosophy/ethics” bucket.
    We need one sentence that instantly cuts away the “ethics” misclassification:
    This converts the frame from:
    • “They’re doing ethics.”
      to
    • “They’re doing the mechanics of cooperation, and ethics is just one output.”
    This is similar to how physicists treat engineering:
    • Physics is the universal model.
    • Engineering is applied physics for particular constraints.
    In our case:
    • Natural Law is the universal model.
    • Ethics is applied Natural Law for high-risk interpersonal behavior.
    • Law is applied Natural Law for adjudicating disputes.
    • Governance is applied Natural Law for institutions.
    • AI alignment is applied Natural Law for machines.
    We are supplying the general case, not the “moral” case.
    People mistake our work for ethics because:
    1. They think truth claims are epistemic, not ethical. They don’t understand that all truth claims are de facto ethical because they alter someone else’s incentives and behavior.
    2. They think trust is emotional, not operational. They don’t understand that trust is a measurement of expected reciprocity under uncertainty.
    3. They think cooperation is voluntary, not computable. They don’t understand that cooperation is a consequence of capital constraints.
    4. They cannot separate morality from reciprocity. They don’t know that reciprocity is a test, not a preference.
    5. They confuse constraint with prescription. They interpret “you may not impose costs” as moral instruction rather than a physical law of stable cooperation.
    Once we say “truth,” “trust,” or “reciprocity,” their classifier fires the “normative ethics” label.
    We can only defeat this natural human error by preceding the ethics-frame with the physics-frame, not following it.
    Here is the exact communication strategy that works across all audiences:
    Step 1. Lead with the general, not the domain.
    Begin with:
    Then domain-specific applications become secondary.
    Step 2. Replace ethical vocabulary with mechanical vocabulary
    Instead of:
    • trust → “reciprocal prediction under uncertainty”
    • truth → “testifiable claims with warrantable consequences”
    • ethics → “constraints on parasitism in cooperation”
    • moral behavior → “reciprocally insurable operations”
    • deception → “uninsured transfers of demonstrated interests”
    This forces category-shift from normative to operational.
    Step 3. Preempt misclassification
    Use a direct disambiguation:
    Step 4. Show cross-domain generality
    Make clear that:
    • Morality is just cooperation within small groups.
    • Law is cooperation under adversarial uncertainty.
    • Governance is cooperation at institutional scale.
    • AI alignment is cooperation with non-human agents.
    When audiences see the universal pattern, they shift out of the “ethics box.”
    Step 5. Give the key analogy
    This analogy always works:
    This instantly relocates our work into the “formal science” domain.
    This sentence is functionally equivalent to the moment that the DSGE [1] economist realizes the Natural Law model is not philosophy but the generative physics of markets, institutions, cognition, and conflict.
    People stop arguing once they see the shift from:
    • “moral philosophy” → to → “operational invariances.”
    And once they see the invariances, everything else becomes obvious.
    Notes:
    1. DSGE: Dynamic Stochastic General Equilibrium model, used in macroeconomic analysis to understand economic phenomena through the interactions of various agents under uncertainty.


    Source date (UTC): 2025-11-28 20:54:42 UTC

    Original post: https://x.com/i/articles/1994510282534912237

  • Smart. Ontologies emphasize hierarchical knowledge structures and semantic relat

    Smart.
    Ontologies emphasize hierarchical knowledge structures and semantic relations, while algebras focus on operational rules and symbols. I’d need to write something meaningful to show correlations and contrasts, but I see the source of your intuition. Off the top of my head I can see the natural conflict between them, the overlap when applied to AI, and an opportunity for insight into the grammars of operational prose versus quantities and ratios. Mostly I see that math is a lower-dimensional logic that is internally commensurable, and natural law is a much higher-dimensional logic that must be made commensurable externally. So it may be that those differences describe the spectrum sufficiently.


    Source date (UTC): 2025-11-28 09:28:00 UTC

    Original post: https://twitter.com/i/web/status/1994337468477472974