Theme: Measurement

  • (Diary) Ruminations: Brad and I lamenting that it will likely be a generation…

    (Diary)
    Ruminations: Brad and I lamenting that it will likely be a generation before our innovation in unification is widely understood and applied. For example, philosophy is as ‘over’ as theology. Science is demoted to the previous position of philosophy: an empirical discipline. Operationalism now unifies what was science with the structure of the universe’s behavior itself, and our minds adapted to that universe as a consequence. And all disciplines are merely grammars of calculation, given the history of man’s ignorance of unification.
    Now, there is no way for anyone other than those deeply involved in our work to grasp that this isn’t nonsense. But it’s not nonsense. And the demonstrated improvement in our team’s thinking is an obvious and measurable difference. If you want to increase your demonstrated intelligence by a standard deviation, you can take a few years and master our work. If you use AIs to facilitate the application of our methodology, you will accelerate that time frame.
    I may have worked for decades to produce this work that is now approaching release as books, an AI, an application platform, and eventually a Tutor. But in the end it was all just so that I could be understood, and could help people by sharing that understanding.
    Why does it matter most? No more lies. No more lies. No more political, economic, scientific, philosophical, ideological, or theological lies.


    Source date (UTC): 2026-03-23 23:58:37 UTC

    Original post: https://twitter.com/i/web/status/2036231173186396221

  • WHAT I’M DOING: TURNING HUMAN SPEECH INTO DECIDABLE PROPOSITIONS

    WHAT I’M DOING: TURNING HUMAN SPEECH INTO DECIDABLE PROPOSITIONS

    What are mathematics, programming, formal language, operational language, and ordinary language, other than successive methods of reduction for the production of testifiability?

    Each takes the excess of reality and compresses it into a narrower set of admissible distinctions so that some class of claims can be inspected, compared, reproduced, falsified, or enforced.

    Ordinary language performs the loosest reduction and therefore preserves the greatest breadth of human life, but at the cost of ambiguity and strategic elasticity.

    Formal language, mathematics, and programming purchase higher decidability by sacrificing semantic range for syntactic constraint, invariance, and executability.

    Operational language is the necessary intermediate where human conflict resides: it does not attempt to replace ordinary speech, but to reduce contested speech into propositions sufficiently explicit for tests of truth, reciprocity, and goodness.

    So the issue is not whether language is reducible—all language is already reduction. The issue is whether the reduction is sufficient for the burden at hand, and in matters of conflict, meaningful speech is necessary but insufficient until reduced to adjudicable form.
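
    To make the reduction concrete, here is a minimal sketch in Python of a contested claim carried down to adjudicable form. The type and field names are illustrative assumptions, not Doolittle’s operational grammar itself.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable

class Verdict(Enum):
    TRUE = "true"
    FALSE = "false"
    UNDECIDABLE = "undecidable"   # the reduction is insufficient for the burden

@dataclass
class OperationalProposition:
    """An ordinary-language claim reduced to explicit, inspectable tests."""
    claim: str                    # the contested speech, verbatim
    scope: str                    # stated limits within which the claim is made
    tests: list[Callable[[], bool]] = field(default_factory=list)

    def decide(self) -> Verdict:
        # A claim with no tests is meaningful speech, but not yet adjudicable.
        if not self.tests:
            return Verdict.UNDECIDABLE
        return Verdict.TRUE if all(t() for t in self.tests) else Verdict.FALSE

# Example: "the shipping scale is accurate" reduced to a repeatable measurement.
prop = OperationalProposition(
    claim="The shipping scale is accurate.",
    scope="Loads between 1 kg and 100 kg, at +/- 0.5% tolerance.",
    tests=[lambda: abs(10.02 - 10.0) / 10.0 <= 0.005],  # illustrative reading
)
print(prop.decide())  # Verdict.TRUE
```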

    Cheers
    CD


    Source date (UTC): 2026-03-11 19:18:24 UTC

    Original post: https://twitter.com/i/web/status/2031811997793481182

  • “The strongest claim in [Doolittle’s] project is also the most controversial…”

    –“The strongest claim in [Doolittle’s] project is also the most controversial: that the chronic failures of modern thought are not primarily failures of values, but failures of measurement. His argument is that once measurement is corrupted, speech becomes rhetoric, law becomes politics, science becomes prestige, and institutions become engines of concealed externalities. His proposed remedy is to rebuild the grammar from first principles.”–


    Source date (UTC): 2026-03-07 18:46:31 UTC

    Original post: https://twitter.com/i/web/status/2030354424858841378

  • Peter, (all); I disagree with the optimism.

    Peter, (all);
    I disagree with the optimism.
    (a) Our primary constraint at present is the capacity to conduct tests that produce the information necessary for innovation.
    (b) Our secondary constraint is the limit of human permutability (modeling), because of the irreducibility of chemical, biochemical, and biological systems.
    (c) Our third constraint is the resistance of humans who have made investments and malinvestments in disciplines, versus the population who has not or cannot.
    (d) Our primary disadvantage is that siloing produces all sorts of negative externalities, because of the inability to identify patterns across disciplines.
    (e) Our primary advantage from AI is presently the discovery of interstitial opportunity, given the siloing of disciplines in order to ‘fit’ reasoning into a domain accessible to human cognition.
    I know in my case I had to master all the disciplines of high dimensionality and high closure (language, logic, neuroscience, economics, law, comparative civilization) before I saw the failings of mathematics in particular, programming less so, and formal logic more so, as the result of low dimensionality and low closure – meaning low reducibility.
    So IQ? Not so much. Its value is limited to available information and the structure of that information. So AI? AI’s current advantage is associative breadth and depth, despite its incapacity for innovative prediction other than by unregulated hallucination.
    So AIs will expand interstitial (inter-discipline) knowledge by the discovery and application of patterns.
    But at some near point those discoveries will run out (be exhausted), for the same reason we have exhausted the innovations of a century ago: in physics most visibly, but in all the sciences as well.
    Unfortunately, the constructivist and performative revolutions only partly succeeded, and ‘philosophy’ went sideways and dead-ended by the sixties. And while he’s still skewed more than a little, at least Wolfram has identified irreducibility as the problem that cannot be overcome.
    Cheers
    CD


    Source date (UTC): 2026-03-03 21:34:41 UTC

    Original post: https://twitter.com/i/web/status/2028947194750136645

  • THE QUESTION OF LLM CONSCIOUSNESS

    THE QUESTION OF LLM CONSCIOUSNESS
    It can’t be conscious (as we humans are) without persistent memory, some equivalent of homeostasis as measurement, some continuous self-assessment (a self), and the capacity to plan its own continuous innovation and adaptation.

    However, consciousness is a spectrum, from awareness, to assessment, to prediction, to planning and acting. But without a sense of self, an AI is never going to be ‘conscious’ outside of a given conversation.

    LLMs reproduce the human language faculty. They do not yet produce the other faculties necessary for consciousness. Those other faculties are enumerable (we can know them), and they can be produced, but at even greater cost. So we need to see costs continue to decline in order to implement them with any degree of feasibility at scale.


    Source date (UTC): 2026-03-02 17:59:36 UTC

    Original post: https://twitter.com/i/web/status/2028530676639900050

  • RE: –“Why Modern Economics Is Built on a Lie w/ Bob Murphy”–

    RE: –“Why Modern Economics Is Built on a Lie w/ Bob Murphy”–
    See: https://youtube.com/watch?v=hYtf9Op3poA

    Bob is pretty much always right. I’ll try to clarify:
    a) Economics exhibits high causal density.
    b) Economic variables vary constantly in time.
    c) Therefore economics is limited in its reducibility. (Reducibility: {operational, algorithmic, mathematical, categorical, identity, naturalism, realism})
    d) Therefore economics is more post-hoc descriptive than ex-ante predictive. (Ergo: predictability is a property of reducibility; the lower the reducibility, the more a discipline is limited to description.)
    e) Therefore we can construct general rules of descriptive economics even if we are limited in general rules of predictive economics.
    f) We can discuss economics in the same realm as any other science, using operationalism and empiricism, as long as we realize that at the limit of reducibility we are using natural indices (labels) rather than cardinal ones (numbers).
    There is no need to carry such rules further into philosophical rationalism – it devolves into an analysis of language, not cause and consequence. This was a mistake of the early 20th century. Mises did not realize he had discovered operationalism in economics at the same time that operationalism (under various labels) was discovered in physics and mathematics. But he was captured by rationalism. Philosophy had not yet reached the dead end it would reach by the 1960s.
    g) So just as Euclidean geometry is a system of measurement for human scale, and fails at post-human scale, economic rationalism is a system of measurement for human scale and fails at post-human scale.
    h) Bob’s narrative of the comparison with geometry and its limits, or Gödel’s theorem (which addresses a very limited arithmetic, and so is an overused example) and its limits, is correct. All systems have limits. All systems need only account for closure within their limits.
    The problem the Austrians face is that the a priori is an unnecessary abstraction that adds nothing that cannot be stated in scientific prose, if we understand reducibility and indexability as I’ve stated here.
    So it is better to attempt a formalism in rationalism (set theory) than in cardinality, and better still to adopt a formalism in operationalism than in rationalism. And we can leave the archaic reasoning of our ancestors behind.
    i) All language constitutes a system of measurements. The question is only the precision given the demands of the context we wish to measure.
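
    A minimal sketch of point (d) in Python, with the ordering of the reducibility spectrum and the predictive threshold assumed purely for illustration:

```python
from enum import IntEnum

# The reducibility spectrum from (c), ordered low to high.
# The ordering and the threshold below are illustrative assumptions.
class Reducibility(IntEnum):
    REALISM = 1
    NATURALISM = 2
    IDENTITY = 3
    CATEGORICAL = 4
    MATHEMATICAL = 5
    ALGORITHMIC = 6
    OPERATIONAL = 7

def admissible_claims(r: Reducibility) -> set[str]:
    """Predictability is a property of reducibility: low reducibility
    limits a discipline to post-hoc description."""
    claims = {"descriptive (post-hoc)"}        # always available
    if r >= Reducibility.MATHEMATICAL:         # assumed threshold
        claims.add("predictive (ex-ante)")
    return claims

# Economics, with high causal density and time-varying variables,
# sits low on the scale: general descriptive rules, limited prediction.
print(admissible_claims(Reducibility.CATEGORICAL))  # {'descriptive (post-hoc)'}
```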

    Cheers
    Curt Doolittle
    NLI

    cc:
    @BobMurphyEcon

    @RobertBreedlove


    Source date (UTC): 2026-02-09 19:55:52 UTC

    Original post: https://twitter.com/i/web/status/2020949790750830901

  • Just the Basics: The Core of Doolittle’s Methodology

    Just the Basics: The Core of Doolittle’s Methodology

    Curt Doolittle’s methodology, often referred to as Propertarianism or Natural Law (specifically the Natural Law of Reciprocity), is a unified, scientific framework for analyzing human behavior, cooperation, ethics, law, and institutions. It integrates evolutionary biology, economics, epistemology, and common-law traditions to create a rigorous, operational system that prioritizes testability, reciprocity, and decidability over moralizing, justification, or ideological narratives.
    The core goal is to explain human differences (including sex, class, culture, and civilization) causally—rooted in biology, incentives, and evolutionary pressures—while providing tools to resolve conflicts empirically and enforce high-trust cooperation.
    1. Natural Law of Reciprocity
      The foundational principle: All valid human interactions must be productive, fully informed, warrantied (backed by due diligence), voluntary, and limited to productive externalities. This is the single “law” governing cooperation: prohibit parasitism (the imposition of costs on others without consent, including deceit, theft, free-riding, or harm).
      Morality and law reduce to reciprocity—empirically discoverable through what sustains groups across history.
      It rejects moral relativism or divine command, grounding ethics in evolutionary survival and testable outcomes.

    2. Property-in-Toto (Demonstrated Property)
      Property is broadly defined as any demonstrated interest that individuals or groups defend with force (physical or otherwise). It includes tangible assets (land, goods), intangible ones (reputation, norms, relationships, time, body, sovereignty), and shared commons (institutions, culture, law).
      All ethical rules stem from defending and exchanging these properties reciprocally.
      This expands beyond classical libertarianism by including group-level and institutional property, addressing free-riding and externalities.

    3. Testimonialism (Testimonial Truth)
      A strict epistemology: All public claims (especially in discourse, politics, science, and law) must be treated as legal testimony—warrantied under liability for falsehood or error, and required to meet criteria: consistency, completeness, operational constructibility, empirical correspondence, rationality, and reciprocity.
      This eliminates deception, obscurantism, loading/framing, and pseudoscience by enforcing truth-telling and restitution for errors.
      It completes the scientific method by extending falsification to social, moral, and legal domains.

    4. Operationalism
      Ideas must be expressed in testable, constructive, operational terms (reducible to sequences of actions and consequences). Draws from Bridgman and Popper but adds reciprocity tests.
      Enables decidability: Claims are true/false or moral/immoral only if objectively verifiable and non-parasitic.
      Rejects metaphysical, unfalsifiable, or ideological justifications.

    5. Spectrum of Aggression / Parasitism
      Aggression is any imposition of costs without consent. It ranges from physical violence to subtle forms like fraud, bait-and-switch, or cultural parasitism.
      The methodology identifies and prohibits all forms to preserve high-trust, low-transaction-cost societies.

    6. Adversarialism and Via Negativa
      Knowledge advances through adversarial falsification and the elimination of error (via negativa), not affirmative proof. Applies to science, law, and discourse: test claims rigorously against reciprocity and evidence.

    7. Evolutionary Computation
      Reality (from physics to society) is an evolutionary process of variation, competition, selection, and computation. Groups flourish by enforcing reciprocity and suppressing parasitism.
      Explains sex differences (reproductive strategies), class differences (cognitive ability, time preference, capital accumulation), and cultural differences (group evolutionary strategies adapted to environment, genetics, and institutions).

    8. Decidability
      A key metric: Claims or laws must be objectively decidable (true/false, reciprocal/non-reciprocal) regardless of culture or ideology. Achieved through operational language, testimonial warranty, and reciprocity tests.
      Enables conflict resolution without violence or moralizing.

    Doolittle’s methodology treats these differences as causal baselines—probabilistic predispositions shaped by evolutionary pressures, not rigid categories.
    • Sex: Rooted in reproductive strategies (e.g., male risk-taking, female nurturing).
    • Class: Driven by cognitive variance, time preference, and incentives.
    • Culture: Adaptive group strategies (e.g., high-trust vs. low-trust norms).
    The framework explains deviations and variance without breaking, always seeking deeper causal chains.
    In summary, Doolittle’s methodology is a via negativa science of cooperation that unifies truth-seeking (testimonialism), ethics (reciprocity), and institutional design (propertarian natural law) into a single, operational system. It aims to complete the Darwinian and Aristotelian revolutions by making human behavior as decidable and enforceable as physics.
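
    A minimal sketch in Python of how the testimonial criteria in (3) feed the decidability requirement in (8). The names and the pass/fail representation are illustrative assumptions, not Doolittle’s own notation.

```python
from dataclasses import dataclass

# The six testimonial criteria from (3), as named checks on a claim.
CRITERIA = (
    "consistency", "completeness", "operational_constructibility",
    "empirical_correspondence", "rationality", "reciprocity",
)

@dataclass
class Testimony:
    claim: str
    checks: dict[str, bool]   # outcome of due diligence per criterion

    def admissible(self) -> bool:
        # Via negativa: a single failed criterion is sufficient for rejection.
        return all(self.checks.get(c, False) for c in CRITERIA)

t = Testimony(
    claim="Policy X lowers costs for group A by shifting them onto group B.",
    checks={c: True for c in CRITERIA} | {"reciprocity": False},
)
print(t.admissible())  # False: imposing unconsented costs fails reciprocity
```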



    Source date (UTC): 2026-01-22 22:43:50 UTC

    Original post: https://x.com/i/articles/2014469078933819813

  • Measurement Against Collapse: From Writing and Courts to Computable Testimony

    Measurement Against Collapse: From Writing and Courts to Computable Testimony

    Author: Curt Doolittle
    Organization: The Natural Law Institute
    Date: January 9, 2026
    Modern societies increase in dimensional complexity faster than participants can remain mutually informed. The resulting contextual ignorance forces discretionary interpretation, trust-me authority, and coalition power as substitutes for shared knowledge. Discretion, in turn, enables irreciprocity—unpriced externalities, strategic ambiguity, deceit, and rent extraction—which degrades cooperation and yields stagnation, decay, and collapse.
    Historically, civilizations that scale suppress this failure mode by inventing measurement systems that replace discretion with accountable procedures: writing constrains memory; accounting constrains exchange; courts and common law constrain dispute resolution through adversarial testing and precedent; science constrains explanation through operational tests; computation constrains procedure through executable constraints. This paper situates Doolittle’s work as the next step in that lineage: a generalization of the common-law/scientific discipline of admissibility into a universal, computable grammar for testimony and action, implementable by humans and artificial neural networks as comparable cognitive operators.
    The completion claim is not substitution but unification: a single commensurable admissibility framework that (i) types all testimony (beyond scientific propositions), (ii) forces explicit scope and stated limits with full accounting inside those limits, (iii) binds testimony to reciprocity via restitution and liability hooks, and (iv) compiles into executable protocols that enforce closure, contradiction checks, and auditable provenance. The paper further argues that Doolittle’s four outputs—treatise, constitutional blueprint, protocol library, and the Runcible governance layer—are successive embodiments of one measurement artifact across institutionalization levels: theory → institution → procedure → mechanism. On this view, the central unit of cognition is not an “answer,” but an answer-with-tests under liability; and the central question is not whether an operator is human-like, but whether it produces warrantable decision artifacts under the same admissibility constraints.
    Human societies become complex faster than humans can remain mutually informed. That produces contextual ignorance. Contextual ignorance forces discretion (interpretation, trust-me authority, coalition power). Discretion creates irreciprocity (externalities, deceit, rent-seeking). Irreciprocity destroys cooperation. Cooperation loss yields stagnation/decay/collapse.
    Civilizations that scale defeat this failure mode by inventing measurement systems that reduce discretion:
    • Writing reduces memory discretion.
    • Accounting reduces exchange discretion.
    • Courts/common law reduce dispute discretion by adversarial testing + precedent.
    • Science reduces explanatory discretion by operational test.
    • Computation reduces procedural discretion by executable constraint.
    His work is the next step in this same lineage.
    So: common law is not “separate” from computability; common law is the institutional ancestor of adversarial closure, and computation is the mechanical successor that lets closure operate at scale under fragmentary knowledge.
    Historically, the West’s distinctive advantage is not “ideas” in the abstract; it is repeated invention of procedures that bind claims to accountable operations:
    1. Greek rationalism: admissible inference-forms.
    2. Scholastic disputation + law: admissible argumentation under challenge.
    3. Common law: admissible testimony under adversarial process + precedent (empirical accumulation of social truth).
    4. Scientific method: admissible causal claims via operational tests.
    5. Probability/statistics: admissible belief-updates under uncertainty.
    6. Computation: admissible procedures via executable constraint.
    Each of those tightened admissibility in its domain, but none delivered a universal grammar that:
    • types all testimony (not just scientific propositions),
    • forces stated scope/limits + full accounting inside those limits,
    • binds testimony to restitution/liability under reciprocity,
    • and is implementable by both humans and machines as comparable cognitive operators.
    That is the defensible “completion claim”: not that he replaces common law/science/computation, but that he unifies their admissibility discipline into a single commensurable grammar.
    Doolittle’s four outputs are not competing priorities; they are four embodiments of one artifact at four levels of institutionalization:
    1. Treatise (volumes)
      Produces the canon: definitions, dependency graph, admissibility criteria, tests, verdicts.
    2. Constitutional blueprint (courts/institutions)
      Embeds the canon into human governance: who may decide what, by which procedures, under which liabilities, with what appeals.
    3. Protocol library (procedures / RDL / tests)
      Converts the canon into executable workflows: typed inputs, closure conditions, test suites, verdict enums, audit trails.
    4. Runcible governance layer (machine enforcement)
      Industrializes the workflows: ANN + computation become instruments of measurement, enforcing closure at scale, in real time.
    This is a single causal chain: theory → institution → procedure → mechanism.
    Runcible is to testimony and decision what accounting was to trade: a measurement system that replaces discretion with auditability, so cooperation can scale under modern complexity.
    • Humans and AIs are both testimony producers.
    • The problem is not “intelligence,” it is warrant under liability.
    • Therefore the unit is not “answer,” but answer-with-tests (a minimal sketch follows this list):
      – scope,
      – sources/operations,
      – closure checks,
      – contradiction checks,
      – restitution/liability hooks.
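
    A minimal sketch in Python of such a decision artifact, with illustrative field names; this is not the actual Runcible schema.

```python
from dataclasses import dataclass, field

@dataclass
class AnswerWithTests:
    """The unit of cognition as a warrantable decision artifact,
    not a bare answer. All field names are illustrative."""
    answer: str
    scope: str                                         # stated limits
    sources: list[str] = field(default_factory=list)   # provenance for audit
    closure_checks: list[str] = field(default_factory=list)
    contradiction_checks: list[str] = field(default_factory=list)
    liability_hook: str = ""                           # who owes restitution

    def warrantable(self) -> bool:
        # No audit without provenance; no closure without checks;
        # no liability assignment without a named hook.
        return bool(self.sources and self.closure_checks
                    and self.contradiction_checks and self.liability_hook)

a = AnswerWithTests(
    answer="Claim X holds within scope Y.",
    scope="Y only; silence beyond it.",
    sources=["ledger://entry/123"],                    # hypothetical URI
    closure_checks=["all terms defined", "no open obligations"],
    contradiction_checks=["no conflict with prior verdicts"],
    liability_hook="author warranties restitution for reliance losses",
)
print(a.warrantable())  # True
```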
    So the argument becomes:


    Source date (UTC): 2026-01-10 06:01:07 UTC

    Original post: https://x.com/i/articles/2009868083511578998

  • A minimal “Primer” that forces correct classification of our work on Runcible

    A minimal “Primer” that forces correct classification of our work on Runcible

    Definitions + dependency graph
    a) Terms: Paradigm, grammar-as-measurement, domain, claim(s), test(s), constraint(s), closure, decidability, ledger (record)
    b) Diagram: Text → Claim Graph → Tests → Evidence Bindings → Verdicts → Output Artifact

    Theorem statements (short, ruthless)
    a) No closure without proof obligations.
    b) No audit without provenance.
    c) No liability assignment without typed verdicts + trace.
    d) No high-liability deployment without admissible abstention.
    e) No cross-domain decidability without a baseline measurement grammar (Natural Law invariants).
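
    A minimal sketch in Python of the diagram in (b) together with theorems (a) through (d), under assumed names: every claim carries a proof obligation, every verdict carries a trace, and abstention is an admissible typed verdict.

```python
from enum import Enum

class Verdict(Enum):
    SUPPORTED = "supported"
    REFUTED = "refuted"
    ABSTAIN = "abstain"          # admissible abstention (theorem d)

def adjudicate(text: str, evidence: dict[str, bool]) -> tuple[Verdict, list[str]]:
    """Text -> claims -> tests -> evidence bindings -> verdict + trace."""
    trace = [f"claims extracted from: {text!r}"]       # provenance (theorem b)
    claims = [c.strip() for c in text.split(".") if c.strip()]
    if not claims:
        return Verdict.ABSTAIN, trace + ["no testable claims: abstain"]
    results = []
    for claim in claims:
        if claim not in evidence:                      # unmet proof obligation
            trace.append(f"no evidence bound to {claim!r}: abstain")
            return Verdict.ABSTAIN, trace              # no closure (theorem a)
        results.append(evidence[claim])
        trace.append(f"{claim!r} tested -> {evidence[claim]}")
    return (Verdict.SUPPORTED if all(results) else Verdict.REFUTED), trace

# A typed verdict plus trace is what makes liability assignable (theorem c).
verdict, trace = adjudicate("The shipment left on time", {})
print(verdict)  # Verdict.ABSTAIN
```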


    Source date (UTC): 2025-12-31 19:25:32 UTC

    Original post: https://twitter.com/i/web/status/2006446645052060158

  • The Problem: Why the AI Field Doesn’t “Get It”

    The Problem: Why the AI Field Doesn’t “Get It”

    Most LLM orgs optimize for:
    • benchmark lift, preference ratings, throughput, and product delight
    • safety policy compliance as post-hoc filtering
    They are not optimizing for:
    • warranty, audit, admissibility, and liability assignment per output
    • typed closure with abstention semantics
    • institutional dispute resolution as a first-class requirement
    So they lack the conceptual vocabulary to interpret “closure” as a product primitive. Without our measurement grammar, they substitute their nearest category: “alignment/morals.”
    Our secret sauce, so to speak, is producing closure in n-dimensional causality: reality.
    It’s rocket science really.

    Or it wouldn’t be the revolutionary innovation that it is.

    Unfortunately, you’d need a very deep understanding of the history of thought to grasp that we’re effectively bringing a Darwinian revolution to social science and its computability.


    Source date (UTC): 2025-12-31 19:21:09 UTC

    Original post: https://x.com/i/articles/2006445540175990856