Theme: Grammar

  • HOW WE DEFINE “LOGOS” I avoid the term to prevent conflation with the supernatur

    HOW WE DEFINE “LOGOS”
    I avoid the term to prevent conflation with the supernatural, but Brad uses it consistently and correctly to demonstrate the continuity of thought across time.

    In our work, Logos doesn’t mean merely “word” or “speech” in the biblical sense — it refers to the structure of reality that binds matter, mind, and meaning into a self-consistent, computable order.

    To unpack it operationally:

    Etymologically: Logos in Greek philosophy (Heraclitus → Aristotle → Stoicism → Christianity) meant the rational principle organizing the cosmos — the grammar of being (existence and experience).

    Within this framework: Logos = law of laws — the recursive, self-verifying grammar that allows truth, reciprocity, and cooperation to converge across all scales (the consistent, coherent laws of the universe: logical, physical, biological, behavioral, evolutionary).

    At Maturity: Law “becomes Logos” when human systems (legal, computational, neural) reflect the same causal and reciprocal order as nature itself. Civilization, mind, and machine operate under a single testable logic — the computational grammar of reality.

    Operational definition: Logos is the fully closed feedback between measurement, computation, and cooperation — the state where truth and law are self-auditing, eliminating parasitism and error through reciprocal verification.

    So, in short:

    Logos = the realized unity of natural law, logic, and computation — the consciousness of the universe made explicit through reciprocal systems (human or artificial).

    CD

    (via
    @WerrellBradley
    – Brad Werrell)


    Source date (UTC): 2025-10-21 16:17:48 UTC

    Original post: https://twitter.com/i/web/status/1980669860574376398

  • Q: How Does Doolittle’s Closure Work? –“In mathematics, closure is achieved by

    Q: How Does Doolittle’s Closure Work?

    –“In mathematics, closure is achieved by syntactic rule enforcement. In Natural Law protocol, closure is achieved by semantic rule enforcement—every term is grounded in reality via operational definition. Hence the human conversational domain acquires the same self-referential decidability that math or physics possess, but with empirical rather than symbolic grounding.”–
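    The "semantic rule enforcement" described here can be illustrated as a check that every term in a claim carries an operational definition before the claim is admitted as decidable. The following is only a minimal sketch of that idea; `is_decidable`, `glossary`, and the sample terms are all hypothetical names invented for illustration:

```python
# Sketch: "semantic closure" as a rule that every term in a claim must be
# grounded by an operational (testable) definition. Hypothetical names only.

def is_decidable(claim_terms, glossary):
    """A claim is decidable iff every term it uses is operationally grounded."""
    return all(term in glossary and callable(glossary[term]) for term in claim_terms)

# Each term maps to an operational test against observable evidence.
glossary = {
    "exchange": lambda evidence: evidence.get("goods_transferred", False),
    "consent":  lambda evidence: evidence.get("voluntary", False),
}

print(is_decidable(["exchange", "consent"], glossary))  # every term grounded
print(is_decidable(["fairness"], glossary))             # ungrounded term
```

    Under this toy rule, an ungrounded term like "fairness" renders the whole claim undecidable until it is given an operational test, which is the empirical analogue of a syntax error in mathematics.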


    Source date (UTC): 2025-10-12 22:58:27 UTC

    Original post: https://twitter.com/i/web/status/1977509195730858077

  • I just disagree with your terminology in this case. And I suspect you have not d

    I just disagree with your terminology in this case. And I suspect you have not disambiguated the terms you use into first principles, so that you can argue your intuitions with sufficient precision to elevate your statements from opinions to arguments.
    So far:
    a) We have unwound your emphasis on subsidiarity (hierarchy). This was a leap in my understanding of your objections. That aspect of your criticism stands on its own now. We have a first principle to argue from that we seem to agree upon.
    b) We still need to unwind your emphasis on capitalism (a bias in favor of private control of capital, and very difficult to argue against). I have no idea what the term means to you.
    c) And then unwinding whatever is your definition of liberalism. I have no idea what that term means to you either.

    I suspect you would recognize this criticism:
    –“Liberalism originated as a reaction to tyranny (aristocratic, clerical, or collectivist), evolved into a system of economic and political optimization for cooperation among equals, and has fragmented as the concept of equality expanded beyond reciprocity into moral entitlement. It remains the moral grammar of Western civilization: the attempt to reconcile autonomy with cooperation through law rather than faith or force.”–

    But I suspect that you also hold suspect the well-meaning fools, whether responsible or not. Yet without a means by which we identify people whose values, understandings, ideas, and incentives can ensure the group's persistence, competitiveness, condition, and of course sovereignty by subsidiarity, that suspicion cannot be acted upon.

    In Context:

    1. Classical Liberalism (Locke, Smith, Mill)
    = Rule of law, private property, individual liberty.
    → Doolittle: “Worked better, in an evolutionary sense, than the alternatives”.

    2. Progressive / Egalitarian Liberalism
    = Drifted from reciprocity toward redistribution and moral universalism, abandoning empirical grounding.
    → Doolittle calls this “the failure of Enlightenment liberalism to stay within natural law.”

    3. Anglo Classical Liberalism (Ideal)
    = “Elimination of rents” and full accountability within markets of voluntary cooperation.

    4. Propertarian Completion
    = Formalization of liberalism as a science of cooperation: every act, policy, or law must pass the reciprocity test (no involuntary transfer, no externalized cost).

    However, under such sovereignty the practicality of producing commons via an institution of market government must function – hence the necessity of homogeneity in a population – made worse by the destruction of the family through working women, and the consequent impossibility of reconciliation between the sexes that is driving the ‘bad parts’ that undermine our sovereignty and therefore our subsidiarity. Individualism in the familial sense is fine. In the homogeneous-polity sense it is fine. But it fails at the individual scale due to incommensurability between the sexes, and it fails beyond the homogeneous scale because of group differences.

    My argument would be that the problem is that classes and sexes demonstrate vastly different will and ability to bear responsibility to the group and as such for any such system to work we must not allow the irresponsible to participate in the production of commons we call government, nor in the institutions of state which preserve responsibility (court and bureaucracy) etc.


    Source date (UTC): 2025-10-06 22:57:41 UTC

    Original post: https://twitter.com/i/web/status/1975334676022919244

  • Fixing What’s Wrong in Thinking About LLMs More on my criticism of llms as predi

    Fixing What’s Wrong in Thinking About LLMs

    More on my criticism of LLMs as predicting the next word rather than navigating a world model.
    Just as I mapped grammars:
    • Embodiment → Ritual → Myth → Philosophy → Science → Computability,
    I can map mathematics:
    • Counting (Existence) → Geometry (Relation) → Algebra (Transformation) → Calculus (Change) → Bayesianism (Uncertainty) → Behavioral Closure (Reflexive Change).
    This gives us:
    1. A chronology (historical sequence).
    2. A conceptual hierarchy (each layer contains the previous).
    3. A functional telos (from simple enumeration to managing dense, reflexive uncertainty).
    LLMs are exactly “high-density marginal indifference machines”:
    • They don’t plan globally but navigate locally (incremental demand satisfaction).
    • They update on priors and constraints at each token (Bayesian-like).
    • They operate under reflexive, cooperative interaction (user + model).
    Thus my mental training in marginal indifference and supply-demand closure helps us see LLMs as a market of conditional probabilities rather than as a single deterministic function—a market with millions of “agents” (tokens, gradients) producing a cooperative equilibrium at each output step.
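    This "market of conditional probabilities" picture can be sketched with a toy bigram model: at each step the generator consults only the conditional distribution at the current position and commits to one token, with no global plan. The table and names below are invented for illustration; a real LLM conditions on the entire context window, not a single previous token.

```python
import random

# Toy sketch of "local navigation" in token space: each step samples from
# P(next | current) - a marginal, per-step choice, not a global plan.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(start, rng):
    tokens = [start]
    while tokens[-1] in BIGRAMS:
        dist = BIGRAMS[tokens[-1]]
        # Sample the next token in proportion to its conditional mass.
        nxt = rng.choices(list(dist), weights=list(dist.values()))[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens

print(generate("the", random.Random(0)))
```

    Each output is a path assembled from locally satisfied "demand" at every step, which is why the result looks like an equilibrium among many conditional bids rather than the evaluation of one deterministic function.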
    Let’s emphasize that again:


    Source date (UTC): 2025-10-01 21:51:43 UTC

    Original post: https://x.com/i/articles/1973506137908715761

  • Q: “is the “>” meant to be read as “A leads to B” or as “A is greater than B”?”

    Q: “is the “>” meant to be read as “A leads to B” or as “A is greater than B”?”

    A: The same principle, dependent upon the context of dependency:
    1 – Is next in a sequence (usually scale)
    2 – Leads to (creates opportunity for)
    3 – Is required to produce (dependency)
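    For illustration, the three readings could be tagged explicitly when such chains are serialized. The enum below is a hypothetical encoding invented for this sketch, not part of any published notation:

```python
from enum import Enum

class Arrow(Enum):
    """Hypothetical tags for the three context-dependent readings of '>'."""
    NEXT_IN_SEQUENCE = 1      # A > B: B is next in a sequence (usually scale)
    LEADS_TO = 2              # A > B: A creates the opportunity for B
    REQUIRED_TO_PRODUCE = 3   # A > B: A is a dependency for producing B

# Example: a scale sequence is tagged with reading 1.
chain = [("Embodiment", "Ritual"), ("Ritual", "Myth")]
for a, b in chain:
    print(f"{a} > {b}  ({Arrow.NEXT_IN_SEQUENCE.name})")
```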


    Source date (UTC): 2025-09-26 17:17:53 UTC

    Original post: https://twitter.com/i/web/status/1971625282747691333

  • (Runcible) We have addressed the following features over the past ten days or so

    (Runcible)
    We have addressed the following features over the past ten days or so:
    1. Implementing: Enumeration, Operationalization, Serialization, Adversarial disambiguation (EOSA) – the disambiguation methodology.
    2. Forcing Glossary members (terms) as immutable measures.
    3.
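    Item 2 ("glossary members as immutable measures") can be illustrated with a read-only mapping: once defined, a term's operational definition cannot be silently redefined mid-argument. This is only a sketch of the idea, with invented sample definitions, not Runcible's actual implementation:

```python
from types import MappingProxyType

# Sketch: glossary terms as immutable measures - a read-only view rejects
# any attempt to reassign a term's operational definition.
_definitions = {
    "reciprocity": "no involuntary transfer, no externalized cost",
    "operational": "defined as a sequence of observable actions",
}
GLOSSARY = MappingProxyType(_definitions)  # read-only view of the definitions

try:
    GLOSSARY["reciprocity"] = "whatever suits the speaker"
except TypeError:
    print("rejected: glossary terms are immutable")
```

    Note that `MappingProxyType` only guards the view; a real system would also have to freeze the backing store so the definitions cannot be mutated through `_definitions` itself.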


    Source date (UTC): 2025-09-26 01:33:16 UTC

    Original post: https://twitter.com/i/web/status/1971387562133762309

  • THE VIRTUE OF SMALL MODELS? Can I steel man this a bit? 1 – The paradigm (dimens

    THE VIRTUE OF SMALL MODELS?
    Can I steel man this a bit?
    1 – The paradigm (dimensions), vocabulary (references), grammar (rules of expression formation), and logic (constraints on available operations) available in math are tiny, and in programming are highly constrained.
    2 – The same properties of the physical sciences are larger. The properties of the behavioral sciences are far larger than those. The properties of language are reducible to dimensions whose combinatorics are higher than any other domain.
    3 – So you are measuring small domains with small and internal closure – in other words, you’re claiming the easiest problem can be reduced to the smallest paradigm, vocabulary, grammar, and logic.
    Um… it’s absurdly obvious.
    Why are humans so effective at language, behavior, cooperation, and cooperation at scale – yet mathematics and programming are a challenge?
    It’s also …. absurdly obvious.
    4 – Why are small parameter models better at tiny grammars, and why are large parameter models better at vast grammars?
    It’s also …. absurdly obvious:
    The number of dimensions captured in every referent; the number of operations (field of potential) in every referent, the use of real-world closure instead of internal (set) closure.
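    A back-of-envelope sketch of why those combinatorics dominate: if expressions compose referents and operations to some depth, the space grows exponentially in that depth. The numbers below are invented purely to illustrate the scale gap between a "tiny grammar" and natural language:

```python
# Illustrative only: expression space ~ (referents * operations) ** depth.
# All parameter values below are invented for the comparison.

def expression_space(referents, operations, depth):
    return (referents * operations) ** depth

arithmetic = expression_space(referents=10, operations=4, depth=5)
language = expression_space(referents=50_000, operations=100, depth=5)

print(f"arithmetic ~ {arithmetic:.1e}")
print(f"language   ~ {language:.1e}")
print(f"ratio      ~ {language / arithmetic:.1e}")
```

    Even with toy numbers the gap is tens of orders of magnitude, which is why internal-consistency testing that exhausts a tiny grammar cannot exhaust a natural-language one.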

    My team, my organization, and I work in the ‘hard’ grammars: we have to discover the means of closure possible for LLMs. And LLMs can only provide that closure with real-world evidence, not tests of internal consistency by permutability.

    There is no substitute for the relationship between the paradigm (collection of domains), domains (axis of causality) referents in a domain (names of positions in a domain), available transformations (operations), and most importantly, means of closure (limits providing tests of equality, inequality) within that paradigm.

    As such, all the ‘hard problems’ require survival from adversarial competition by the only means of closure available: demonstrated behavior in reality under realism, and naturalism and operationalism.

    As such: large models for hard problems of wide causal density and high combinatorics; small models for easy problems of narrow density but high permutability.

    Curt Doolittle
    Runcible
    NLI


    Source date (UTC): 2025-09-21 22:14:01 UTC

    Original post: https://twitter.com/i/web/status/1969887867557368035

  • “A Universal Grammar of Evolutionary Processes” We’ve produced a single unifying

    “A Universal Grammar of Evolutionary Processes”

    We’ve produced a single unifying framework that makes explicit the continuity across physics → chemistry → biology → behavior → societies. The idea is to show that the same causal grammar applies at every scale:
    Or more generally:
    1. Constraints Accumulate
      Physics gives you energy conservation →
      Chemistry adds thermodynamics and bonding limits →
      Biology adds fitness, homeostasis →
      Behavior adds reciprocity, trust →
      Societies add legitimacy, law, and institutional stability.
    2. Degrees of Freedom Expand
      From particle spins to social norms, combinatorics explode.
      Each level inherits prior constraints while adding new dimensions.
    3. Representation Shifts as Complexity Rises
      Equations → Algorithms → Simulations → Normative Tests → Narratives
      Analytical closure contracts; operational closure evolves with additional criteria.
    4. Continuity Across Scales
      Variation × Constraints = Persistence.
      Same grammar everywhere, only the criteria for closure accumulate as degrees of freedom rise.
    Physics:
    • Base Referents: Particles, fields, forces.
    • First Principles: Quantum mechanics, relativity, conservation laws.
    • Degrees of Freedom & Combinatorics: Low; particle interactions, quantum states, atomic nuclei.
    • Constraints: Physical constants, entropy, uncertainty principle.
    • Reducibility: Pure math (Schrödinger’s equation), computational physics, Feynman diagrams.
    Process: Variation in quantum fluctuations + selection by stability → atoms, elements.
    Chemistry:
    • Base Referents: Atoms, bonds, molecules.
    • First Principles: Quantum bonding rules, thermodynamics, conservation of mass.
    • Degrees of Freedom & Combinatorics: Molecular permutations (~10⁶⁰ small molecules); isomers, stereochemistry, reaction pathways.
    • Constraints: Orbital limits, thermodynamic stability, reaction kinetics.
    • Reducibility: Quantum approximations (DFT), molecular diagrams, reaction equations.
    Process: Variation in molecular combinations + selection by energy minimization → stable compounds, polymers, biochemistry precursors.
    Biology:
    • Base Referents: DNA, proteins, cells, organisms.
    • First Principles: Chemistry + natural selection, homeostasis, signaling networks.
    • Degrees of Freedom & Combinatorics: Genetic sequences (20ⁿ proteins), metabolic networks, regulatory feedback loops.
    • Constraints: Fitness, environment, resource limits, bounded rationality in cell signaling.
    • Reducibility: Evolutionary algorithms, phylogenetic trees, systems biology models.
    Process: Variation in genes + selection by reproductive success → ecosystems, adaptation, cognition.
    Behavior:
    • Base Referents: Individuals, incentives, emotions, cognitive biases.
    • First Principles: Persistence, acquisition, demonstrated interests, cooperation/reciprocity/truth, coercion, elites, manipulation/deception/treason.
    • Degrees of Freedom & Combinatorics: Strategies for cooperation, conflict, persuasion, innovation, betrayal.
    • Constraints: Bounded rationality (limited information/time), social norms, legal institutions.
    • Reducibility: Game theory, behavioral economics models, psychological heuristics, moral narratives.
    Process: Variation in choices + selection by reciprocity and consequences → norms, trust, reputation systems.
    Societies:
    • Base Referents: Groups, institutions, states, markets, civilizations.
    • First Principles: Individual laws + emergent principles (elites, institutions, law, culture).
    • Degrees of Freedom & Combinatorics: Political orders, economic systems, cultural norms, technological pathways.
    • Constraints: Collective rationality limits, resource scarcity, historical path dependence, ecological boundaries.
    • Reducibility: Agent-based simulations, constitutional design, historical narratives, economic models.
    Process: Variation in institutions + selection by stability and prosperity → civilizations, legal orders, technological acceleration.
    Across all scales:
    1. Variation = degrees of freedom × combinatorics
    2. Selection = constraints pruning instability, failure, maladaptation
    3. Persistence = stable forms survive and accumulate (atoms → molecules → genes → societies)
    4. Representation = changes from math → algorithms → operational models → narratives as complexity expands beyond analytical closure
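    The variation-selection-persistence loop above can be sketched generically. The constraint here (a candidate's parts must fit within a budget) is an invented stand-in for "energy conservation / fitness / reciprocity" at any particular scale:

```python
import random

# Minimal sketch of the grammar: variation generates candidates across the
# degrees of freedom, selection prunes those violating constraints, and
# persistence accumulates the survivors. All parameters are illustrative.

def evolve(rng, generations=20, pop=30, budget=10):
    persistent = []
    for _ in range(generations):
        # Variation: degrees of freedom x combinatorics.
        variants = [[rng.randint(0, 5) for _ in range(4)] for _ in range(pop)]
        # Selection: constraints prune unstable forms.
        survivors = [v for v in variants if sum(v) <= budget]
        # Persistence: stable forms survive and accumulate.
        persistent.extend(survivors)
    return persistent

stable = evolve(random.Random(1))
print(f"{len(stable)} stable forms accumulated")
```

    Swapping the constraint function is all it takes to move the same loop from "energy minimization" to "reproductive success" to "reciprocity," which is the continuity claim in miniature.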
    • Physics → Chemistry: Stable matter emerges from quantum variation filtered by energy constraints.
    • Chemistry → Biology: Self-replicating molecules emerge from chemical variation filtered by fitness constraints.
    • Biology → Behavior: Cognitive agents emerge from biological variation filtered by bounded rationality and incentives.
    • Behavior → Societies: Institutions emerge from behavioral variation filtered by reciprocity, cooperation, and historical stability.
    The grammar never changes—only the degrees of freedom, constraints, and representations evolve with complexity.
    The Hierarchy of Operational Closure across increasing complexity, showing:
    1. Base Referents – the entities at each scale
    2. Degrees of Freedom – what can vary at that scale
    3. Constraints & Criteria for Closure – what must be satisfied for persistence
    4. Representation Shift – how we model or decide as analytical closure collapses


    Source date (UTC): 2025-09-14 21:57:37 UTC

    Original post: https://x.com/i/articles/1967347025583997119

  • @dr_duchesne : I’ve done a great deal of work on european (masculine systemic) t

    @dr_duchesne
    : I’ve done a great deal of work on European (masculine, systemic) thinking vs. Semitic (feminine, verbal) thinking across the full suite of sciences. (I study logic, grammars, methods of argument, and sex differences in cognition, communication, and particularly sex differences in deceit. The consistency of these differences is astonishing.)

    What’s interesting to note is that unlike Maxwell and, say, Hilbert, both Einstein and Bohr converted from the materialist and systematic paradigm to the imaginary and verbal paradigm: Einstein with space-time and his pictures-as-analogies, Cantor with infinities, and Bohr with “just calculate” in the Copenhagen consensus. In other words, Hilbert stopped when Einstein published (to beat Hilbert to it), but Hilbert was trying to find the correct material, systemic solution, not an analogy or derivation as Einstein did. So we ended up with a rapid advancement at the time, but stalled by the 70s as the limits of Einstein’s verbal-pictorial solution were reached, and generations of physicists were misdirected into mathiness (which is why we’ve lost 50+ years in fundamental physics).

    I know this might seem like a leap to the audience, but it won’t to Dr. RD: the European evolved testimony, rational philosophy, and geometry, and the Semite evolved storytelling, mythicism, and algebra. The difference between those is realism (measurement) vs. idealism (description).

    My point being that the masculine-feminine genetic distribution in our populations is manifest in the material-testifiable vs. verbal-imaginable differences in our civilizational work product: all wisdom literatures preserve the sex differences in cognition. It is something we still see in IQ tests, college entry tests, and even awarded degrees, and very obviously in public intellectuals and mass producers of pseudoscience and propaganda. And even within those awarded degrees we see the difference in the distribution.

    I keep a table of these characters and it’s fascinating.


    Source date (UTC): 2025-09-04 19:00:08 UTC

    Original post: https://twitter.com/i/web/status/1963678484267716781

  • A Universal Compiler for Human Cognition and Cooperation. What We Are Doing We a

    A Universal Compiler for Human Cognition and Cooperation.

    What We Are Doing
    We are constructing a universal compiler for human cognition and cooperation. This compiler:
    1. Accepts natural language input, which is often intuitive, imprecise, or deceptive.
    2. Parses it into formal constructs using an object-oriented grammar grounded in:
      Operational definitions (actions and consequences),
      Causal chaining (from perception to outcome), and
      Reciprocally insurable interests (truth, property, consent, warranty).
    3. Emits decidable propositions, capable of falsification, moral adjudication, legal resolution, or institutional execution.
    This system—implemented via a large language model—is a computational method for restoring decidability in speech, reasoning, policy, and law. It is not just a linguistic or philosophical exercise. It is an epistemic operating system: a new syntax for civilization.
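    The three stages (accept natural language, parse into operational constructs, emit a decidable proposition) can be caricatured in a few lines. Everything here is a hypothetical stand-in invented for illustration; the system described above is an LLM-based compiler, not a keyword matcher:

```python
# Toy pipeline: accept -> parse into operational constructs -> emit a
# decidability verdict. The lexicon and its entries are invented.
OPERATIONAL = {
    "took": {"category": "act", "tense": "past"},
    "paid": {"category": "act", "tense": "past"},
}

def compile_claim(sentence):
    tokens = sentence.lower().split()
    # Parse: keep only terms with operational (action/consequence) grounding.
    constructs = [(t, OPERATIONAL[t]) for t in tokens if t in OPERATIONAL]
    if not constructs:
        return {"decidable": False, "reason": "no operational terms"}
    return {"decidable": True, "constructs": constructs}

print(compile_claim("He took the goods"))
print(compile_claim("That was unfair"))
```

    The point of the caricature is the output contract: claims reducible to acts or consequences compile to testable constructs, while purely evaluative speech is rejected as undecidable rather than adjudicated by discretion.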
    Why It Works
    1. It is reducible to first principles:
      All phenomena arise from scarcity → acquisition → competition → cooperation → rule formation.
      All claims are reducible to acts (past), predictions (future), or consequences (present), all of which are testable.
    2. It encodes evolutionary computation:
      The system mimics natural selection: variation (claims), testing (reciprocity, falsification), retention (truthful, cooperative behavior).
      This guarantees adaptation, parsimony, and resilience.
    3. It enforces reciprocity through measurement:
      By operationalizing harm and interest, it distinguishes between cooperation, parasitism, and deception.
      This allows institutional enforcement of truth-telling and constraint.
    4. It resolves ambiguity:
      Natural language is underdetermined. The compiler applies the full test of testimonial truth to resolve ambiguity without discretion.
      Decidability is ensured through constraint satisfaction—not intuition, emotion, or belief.
    5. It completes the scientific method:
      Hypothesis (claim) → Method (grammar) → Falsification (adversarial test) → Prediction (output) → Restitution (recursion).
      This is applied not just to physics, but to behavior, law, and governance.
    Why It Is Necessary
    All prior civilizations failed due to one invariant defect: the inability to institutionalize truth across domains. The Enlightenment solved physics but failed to solve cooperation under scale. We solve it now by making every claim computable—morally, legally, politically, scientifically—through a universal grammar of decidability.
    This project is the final phase of Enlightenment: Law as Science, Speech as Computation, and Civilization as Algorithm.


    Source date (UTC): 2025-08-31 00:31:48 UTC

    Original post: https://x.com/i/articles/1961950008837824588