Theme: Measurement

  • Great Question: The difference between ingroup morality (obligation: investment)

    Great Question: The difference between ingroup morality (obligation: investment) and outgroup morality (utility: measurement).
    The purpose of discussing outgroups and ‘morality’ is merely to determine whether they will retaliate or not on the one hand, and how to resolve disputes if desired on the other. But there is no moral duty to outgroups. There is moral duty to ingroups – because that’s what ingroup means.
    There are a number of these questions that should be easy to disambiguate if you work from first principles. But what I am observing is that y’all aren’t always working from first principles. And so you try to create universals. And this confuses you. It shouldn’t. But it’s a remnant of christian (abrahamic) universalism. We forget that greek thought was in terms of the polis (ingroup). Aristotle was an elitist culturist racist of the highest order. So we misread the greeks, and the christians were simply wrong. 😉

    ON AIs. The problem with AIs today, even the one I’ve uploaded our documents into, is that they were trained on this universalism, and they suck at second- and third-order logic. So unless you prime them with some series or spectrum, they will bias toward the simple answer. This will take TRAINING, not RAG (retrieval-augmented generation), to fix.
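
    A minimal sketch of what “priming with a series or spectrum” could look like in practice, using the grammar spectrum the author describes elsewhere in this collection; the prompt wording and the function name are my own illustration, not a prescribed method:

    ```python
    # Sketch of "priming with a spectrum": prepend an explicit series to the
    # prompt so the model grades along it instead of collapsing to a binary.
    # The spectrum terms come from the author's grammar discussion below;
    # the prompt scaffolding is a hypothetical illustration.
    SPECTRUM = ["deflationary", "normative", "inflationary",
                "deceptive", "fraudulent", "seditious", "treasonous"]

    def primed_prompt(statement: str) -> str:
        """Build a prompt that forces graded, not binary, judgment."""
        return (
            "Grade the statement on this spectrum, not merely as true/false:\n"
            + " -> ".join(SPECTRUM)
            + f"\nStatement: {statement}\nGrade and justify:"
        )

    print(primed_prompt("All cooperation is voluntary."))
    ```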


    Source date (UTC): 2025-05-23 15:49:37 UTC

    Original post: https://twitter.com/i/web/status/1925942201915224085

  • That’s correct. The universal grammar is “continuous recursive disambiguation”

    That’s correct. The universal grammar is “continuous recursive disambiguation” which is of course the universal logic of the universe as well. The variations within that universal grammar are limited by human cognitive capacity at prediction – which is why we can construct longer sentences of deeper complexity as we age and learn.

    So, there is one law to the universe, and grammar is an application of that law. The permutations possible with vocalizations – the noun (state), verb (action), qualifiers (adverbs, adjectives, pronouns), and agreement (comprehensible/not, good/bad, agree/not, true/false, etc.) – can be ordered in various ways, but they are consistent in dimensions of representation, and cumulatively they must satisfy continuous recursive disambiguation for communication by serial speech or symbol to function.
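
    Read operationally, the claim is that serial speech functions by progressively pruning the space of candidate meanings as each new token arrives. A minimal sketch of that pruning, assuming a hypothetical toy lexicon (none of these words or senses come from the post):

    ```python
    # Toy model of "continuous recursive disambiguation": each incoming token
    # narrows the set of surviving interpretations until one remains.
    # Lexicon and senses are hypothetical illustrations.
    LEXICON = {
        "bank": {"river_edge", "financial_institution"},
        "deposit": {"financial_institution", "sediment"},
        "money": {"financial_institution"},
    }

    def disambiguate(tokens):
        """Serially narrow the surviving senses as each token arrives."""
        candidates = None
        for token in tokens:
            senses = LEXICON.get(token)
            if senses is None:
                continue                  # token adds no disambiguating signal
            candidates = senses if candidates is None else candidates & senses
            print(f"after {token!r}: {sorted(candidates)}")
            if len(candidates) == 1:
                break                     # fully disambiguated; stop early
        return candidates

    disambiguate(["bank", "deposit", "money"])
    # after 'bank': ['financial_institution', 'river_edge']
    # after 'deposit': ['financial_institution']
    ```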


    A REPLY TO:

    Huh. Looks like Plato was right. A new paper shows all language models converge on the same “universal geometry” of meaning. Researchers can translate between ANY model’s embeddings without seeing the original text. Implications for philosophy and vector databases alike.

     


     


    Source date (UTC): 2025-05-23 14:20:39 UTC

    Original post: https://twitter.com/i/web/status/1925919812196167721

  • Convergence to representational commensurability must occur when using abstract

    Convergence to representational commensurability must occur when using abstract representation (N-dimensional networks) – humans are marginally indifferent in sense perception – and language must be accessible across the sex (responsibility), class (genetic load) and intelligence (neotenic development) spectra.
    Many ‘discoveries’ in AI are only discoveries of consistency and correspondence with the operational model of the brain discovered by neuroscience.
    Many problems with neuroscience emerge from failing to produce an operational (what) model of the brain instead of a neurochemical (why) one. Turns out the nerve, neuron, neural microcolumn, neural column, region, hemisphere, network organization of the brain is pretty simple in the aggregate, even if complex out of sheer numbers – just like the parameters in our AI neural networks.
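
    The convergence claim (and the “universal geometry” result quoted above) can be illustrated by aligning two embedding spaces that share the same relational structure but different coordinates. A minimal sketch with synthetic data – the matrices stand in for two hypothetical models’ embeddings of the same items:

    ```python
    # Minimal sketch of representational commensurability: two "models" embed
    # the same items in different coordinate systems, yet an orthogonal
    # rotation (Procrustes alignment) maps one space onto the other.
    import numpy as np

    rng = np.random.default_rng(0)
    n_items, dim = 100, 16

    A = rng.normal(size=(n_items, dim))            # stand-in for model 1's embeddings
    Q_true, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
    B = A @ Q_true                                 # model 2: same geometry, rotated

    # Recover the rotation from the paired embeddings alone (orthogonal Procrustes).
    U, _, Vt = np.linalg.svd(A.T @ B)
    Q_est = U @ Vt

    print(np.allclose(A @ Q_est, B))               # True: the spaces are commensurable
    ```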


    Source date (UTC): 2025-05-23 14:14:47 UTC

    Original post: https://twitter.com/i/web/status/1925918335901778185

  • OBVIOUS. Deterministic right? If all experience is reducible to disambiguation o

    OBVIOUS.
    Deterministic, right? If all experience is reducible to disambiguation of sense perception, and all language is reducible to those disambiguations, and inter-language commensurability holds in representation using n-dimensional relations, then we should see the convergence of concepts in the mind and in language, just as we do in the sciences and the grammars – with the ‘sounds’ that encode those concepts the only variation. i.e.: “Convergence”.


    Source date (UTC): 2025-05-23 14:08:35 UTC

    Original post: https://twitter.com/i/web/status/1925916774374977561

  • Those numbers look low to me, so I would need an explanation of their constituti

    Those numbers look low to me, so I would need an explanation of their constitution (what went into them). However, at current debt and spending levels it is entirely possible these numbers are correct.


    Source date (UTC): 2025-05-21 17:13:03 UTC

    Original post: https://twitter.com/i/web/status/1925238421657985524

  • MORE ON TERNARY LOGIC OF EVOLUTIONARY COMPUTATION AS THE FOUNDATION OF UNIVERSAL

    MORE ON TERNARY LOGIC OF EVOLUTIONARY COMPUTATION AS THE FOUNDATION OF UNIVERSAL CAUSALITY AND COMMENSURABILITY
    (from elsewhere)
    Interesting. This helps me understand what y’all are missing. You haven’t understood the relationship between first principles, the ternary logic of evolutionary computation (operationalism), the spectrum of grammars as logics (operationalism), and the ternary logic itself, which is a verbal operational end point of all verbal descriptions of phenomena.

    Envision the cover of the book Gödel, Escher, Bach. The same shape can appear as a projection of completely different entities depending upon the point of view. This is true for ALL sets of relations. And the incommensurability of sets of relations is one of the reasons people invent ‘custom’ or ‘private language’ means of understanding something. Yet if that same something were described from the same perspective as everything else, producing the same causal projection, it would be universally commensurable with everything else. It would ‘fit in’ to the model one was using to understand the world.

    Science, for example, converted tens of thousands of discrete rules into a smaller number of general rules, which we call the sciences. This provided a more universal understanding of the behavior of the universe, from which deduction, induction, and abduction increased, knowledge expanded, and demonstrated human intelligence increased by nearly a standard deviation.

    To seek universal commensurability across all domains, in particular behavioral domains, requires the same baseline (means of projection). To achieve it we needed first causes and the resulting hierarchy of first principles.

    We then take any given concept within any given subject and, through enumeration, serialization, operationalization, and reduction to first principles, we develop an axis of measurement. If within that axis of measurement of any spectrum we discover its evolutionary computation – +/−/= and its transformation before/during/after – such that it is commensurable with the prior state and the post state, we have rendered all subjects commensurable.
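
    A minimal sketch of that measurement step, assuming a toy scalar state; the names compare and Step are mine, purely for illustration:

    ```python
    # Toy model of the ternary measurement (+/-/=) applied to the
    # before/during/after transformation of some measured state.
    # All names and values here are hypothetical illustrations.
    from dataclasses import dataclass

    def compare(prior: float, post: float) -> str:
        """Ternary outcome of a transformation: positive, negative, or equal."""
        if post > prior:
            return "+"
        if post < prior:
            return "-"
        return "="

    @dataclass
    class Step:
        before: float
        during: float
        after: float

        def outcomes(self) -> tuple[str, str]:
            # The same ternary measure applies to prior-state -> intermediate
            # and intermediate -> post-state, keeping every step commensurable.
            return compare(self.before, self.during), compare(self.during, self.after)

    print(Step(before=1.0, during=1.5, after=1.5).outcomes())  # ('+', '=')
    ```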

    Now, given the discussion above, it’s pretty clear y’all don’t understand the meaning of ‘logic’ or the spectrum of logics that emerge from each increase in the permissible number of dimensions within a paradigm, and the resulting grammar of that paradigm. Nor do you appear to understand how higher mathematics solves this problem of commensurability through projections (baseline) and rotation (commensurability of baselines).
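
    The mathematical point can be made concrete: two descriptions of the same system in rotated baselines carry identical relational content, which is why a rotation suffices to render two baselines commensurable. A minimal sketch with synthetic data:

    ```python
    # Sketch: descriptions of one system in two rotated coordinate baselines
    # have the same matrix of pairwise relations (invariant under rotation),
    # so a rotation renders the baselines commensurable.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(5, 3))                  # description in baseline 1
    R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    Y = X @ R                                    # same system, rotated baseline 2

    print(np.allclose(X @ X.T, Y @ Y.T))         # True: identical pairwise relations
    ```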

    What you’re doing instead is mistaking set logic, and its representation as symbolic logic, for the only logic, when that is only a subset of the logics possible and produced by man as the grammars evolve from the deflationary to the normative to the inflationary, to the deceptive, to the fraudulent, to the seditious, to the treasonous.

    And in doing so you’re not grasping why the work produces a unification of the sciences by universal commensurability, by universal constructability from first principles. In other words, you’re missing the whole point of the work as a revolution equal to that of empiricism and science, or at least equal, in the behavioral and cognitive sciences, to what darwinian thought and Watson and Crick were in the biological sciences.

    Now, I don’t particularly mind when people tell me that I have failed to explain some aspect of the work sufficiently that it is accessible to less educated (or skilled, or knowledgeable) people. I have a long history of those failures of not grasping what others don’t understand. It’s normal for folks like me. But when y’all claim I err, when in fact you don’t understand, it’s just the masculine systemic method of ego defense, as making moral accusations is the feminine empathic method of ego defense.

    Much of my work derived its insights from the failures in mathematics, economics, and physics. Most of these failures originate in the presumption of a given method of thought being a universal rather than a grammar on the spectrum of grammars – this prevents people from generalizing specific domain information to additional domains, and in particular to the universal domain, which can and does have only one rule: the evolutionary computation of persistence by the trial-and-error discovery of increasingly energetically stable relations, under the ternary logic of evolutionary computation that is the means by which everything at all scales in the universe is produced.

    As such, the FRAME OF REFERENCE one uses to determine consistency and coherence across scales is what we are trying to explain and teach. But it is HARDER than the simpler domain-specific series everyone has been accustomed to under the domain-specific evolution of the sciences. And just as the religious, philosophical, empirical, scientific, and operational domains are challenging transitions, the universal frame of reference (projection) from first causes, which allows commensurability of all scales sufficient to explain all scales, is difficult to learn despite my work on the grammars to illustrate it.

    So what I have learned from this rather exasperating exercise is that when I started with geometry, then worked through words and grammars, everyone got lost. They couldn’t cognitively manage it. The same appears here unless I can rectify it: I must teach evolutionary computation by the ternary logic, then the grammars and their emerging logics, before this relatively simple causal hierarchy becomes intuitive.

    So, despite the frustrations, thank you all for helping me understand where my communication (in volume 3) fails. However, in the future, I would prefer you did not assume I err, which only serves to encourage people who are easily lost to go sideways into ‘ways of thinking’ (failure) because they cannot grasp core principles of The Work, and as such, just like the marxists, you generate new falsehoods that must then be countered.


    Source date (UTC): 2025-05-20 19:49:46 UTC

    Original post: https://twitter.com/i/web/status/1924915472937681149

  • Reasons for Doolittle’s Ternary Logic and Universal Commensurability Curt Dooli

    Reasons for Doolittle’s Ternary Logic and Universal Commensurability

    Curt Doolittle’s framework relies on the ternary logic of evolutionary computation—using positive, negative, equal, and unequal—because it encodes the full spectrum of causal relations necessary for universal computation across physical, biological, cognitive, and institutional domains. This ternary logic replaces the insufficient binary logic of justificationism (true/false) with a system capable of describing all observable, falsifiable, and decidable relations. Here’s how each of the components you reference fits into the causal structure that enables universal causal commensurability across all scales:
    1. Ternary Logic: Positive, Negative, Equal, Unequal
    • Necessity: Binary logic only supports two states—truth and falsehood. Ternary logic allows for comparison and operation, not just classification. Evolution doesn’t operate on ideal states but on relative relations—this is why equal and unequal, and positive and negative (feedback/effects) are required.
    • Function: Enables modeling of evolutionary computation as a continuous, recursive feedback process of detection, indexing, prediction, and correction.
    • Consequence: It allows law, logic, cognition, markets, and cooperation to be framed in the same operational terms as physical phenomena.
    2. Triangles and Scales: The Geometry of Relations
    • Operational Form: Doolittle uses triangular representations to model three-variable relations (e.g., actor-object-outcome), which are necessary to capture the minimal sufficient causal structure at any scale.
    • Universal Geometry: This geometrical representation allows the unification of concepts from physics (e.g., vector fields), cognition (e.g., intention-action-perception), and institutions (e.g., law, money, norms).
    • Consequence: It offers a scale-invariant structure to visualize and compute the relationships between entities—whether particles, individuals, or institutions.
    3. Behavioral Equivalent: Supply, Demand, Exchange
    • Necessity: These are operational proxies for evolutionary computation in markets. In behavioral terms, they represent wants (demand), means (supply), and action (exchange).
    • Causal Chain: These variables reflect demonstrated interests and their negotiation under constraints, fulfilling Doolittle’s requirement for operational reducibility of human behavior.
    • Consequence: Establishes behavioral economics as a domain of universal measurement, subject to falsification and decidability.
    4. Three Means of Coercion: Remunerative, Punitive, Normative
    • Necessary Dimensions of Constraint: All human cooperation requires constraint; coercion is merely constraint via demonstrated incentives. These three map onto market (remunerative), state (punitive), and cultural (normative) systems.
    • Operational Structure: They provide a complete taxonomy of coercive feedback necessary to model and correct behaviors under law, morality, and economics.
    • Consequence: These three mechanisms explain how institutions evolve, persist, and fail—and how they can be restored or reformed within a single logic of reciprocity.
    5. Truth States: Undecidable, Possibly True, Falsehood
    • Epistemic Necessity: In a world of uncertainty, undecidable (unknown), possibly true (provisionally retained), and false (disproven) are the only epistemically responsible categories.
    • Adversarial Logic: This ternary truth grammar supports the via negativa: error correction by elimination rather than affirmation (sketched in code after this list).
    • Consequence: It ensures truthfulness as a function of liability, and knowledge as a contractual warrant, not belief.
    6. Universal Causal Commensurability
    • Convergence: These constructs—ternary logic, geometric representation, behavioral models, coercive taxonomies, and truth grammars—enable all phenomena to be expressed in the same operational terms.
    • Result: Decidability across all domains—from physics to law—becomes possible because all use the same underlying logic: evolutionary computation governed by reciprocity in demonstrated interests.
    Summary: Doolittle’s use of ternary logic, triangle representations, coercion types, behavioral economics, and decidability grammar is not decorative but necessary. Together they form a universal, operational logic that renders all domains causally commensurable—that is, expressible, testable, and falsifiable using the same epistemological and ontological grammar. This is the mechanism by which his Natural Law achieves unification of all domains into a system of universal decidability.
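
    As referenced in item 5 above, a minimal sketch of the three truth states driving error correction by elimination; the enum and function names are my own labels, not Doolittle’s notation:

    ```python
    # Toy model of the three epistemic truth states (undecidable, possibly
    # true, false) classified via negativa: a claim is never proven, only
    # retained until eliminated. Names are hypothetical labels.
    from enum import Enum

    class Truth(Enum):
        UNDECIDABLE = "undecidable"      # unknown: insufficient tests
        POSSIBLY_TRUE = "possibly true"  # provisionally retained: survived tests
        FALSE = "false"                  # disproven: eliminated

    def via_negativa(survived_tests: list[bool]) -> Truth:
        """Classify a claim by elimination rather than affirmation."""
        if not survived_tests:
            return Truth.UNDECIDABLE     # no tests run yet
        if all(survived_tests):
            return Truth.POSSIBLY_TRUE   # retained, never affirmed
        return Truth.FALSE               # one failed test eliminates it

    print(via_negativa([]))              # Truth.UNDECIDABLE
    print(via_negativa([True, True]))    # Truth.POSSIBLY_TRUE
    print(via_negativa([True, False]))   # Truth.FALSE
    ```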


    Source date (UTC): 2025-05-20 00:23:30 UTC

    Original post: https://x.com/i/articles/1924621971696001251

  • Unfortunately, europeans can’t do what we call in basic economics ‘full accounti

    Unfortunately, europeans can’t do what we call in basic economics ‘full accounting’ – ‘the seen and unseen’.


    Source date (UTC): 2025-05-12 18:32:44 UTC

    Original post: https://twitter.com/i/web/status/1921996984153510044

    Reply addressees: @KarlKautabak @BehizyTweets

    Replying to: https://twitter.com/i/web/status/1921958383625683385

  • How and Why Object-Oriented Analysis Became the Method of Research in My Work Ob

    How and Why Object-Oriented Analysis Became the Method of Research in My Work

    Object-Oriented Programming was originally invented to construct simulations—not just to write software efficiently. Its core premise is simple: reality is composed of interacting agents, each with properties (state) and behaviors (methods). OOP provides a structure to model such agents, simulate their interactions, and observe emergent behavior across time. This made it ideal for modeling complex, dynamic systems like physical processes, biological evolution, or socio-economic institutions.
    Where most thinkers use philosophical reasoning—often justificationist, interpretive, or axiomatic—I used object-oriented analysis and design to simulate the world from first principles upward. This method forces strict operational thinking: What is the object? What properties does it have? What actions can it perform? What messages does it send or receive? It eliminates ambiguity, ensures compositional integrity, and requires that all assertions be reducible to measurable or observable operations.
    This epistemological commitment—constructivist, operationalist, and simulation-driven—allowed me to model the universe not as a set of verbal propositions, but as a computational process: evolutionary computation across physics, biology, cognition, and law. I wasn’t writing metaphysics—I was building a universal simulator for behavior, cooperation, and institutional evolution.
    This approach enables:
    • Causal completeness: All entities and actions are traceable to their operational causes and consequences.
    • Composability: Concepts are structured like code modules—interchangeable, extendable, and testable.
    • Decidability: Claims are not just interpretable; they must be testable as true, false, undecidable, or irreciprocal.
    • Universality: Any domain—law, economics, cognition, ethics—can be modeled using the same logic of agents, constraints, interactions, and outcomes.
    In effect, I didn’t write a “theory of everything.” I simulated everything using OOP principles as my epistemic substrate. That’s why I speak in systems, sequences, and state transitions—because that’s how the world works, and that’s how I model it.
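
    A minimal sketch of this style of analysis, assuming a hypothetical Agent class and a toy barter rule of my own invention – agents as state (properties) plus behavior (methods) exchanging messages, with outcomes observable over time; this illustrates the method, not the author’s actual simulator:

    ```python
    # Object-oriented sketch: agents hold state, expose behaviors, and send
    # messages (an exchange offer); the exchange occurs only if reciprocal.
    # Names, goods, and the barter rule are hypothetical illustrations.
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        name: str
        goods: dict[str, int] = field(default_factory=dict)

        def wants(self, good: str) -> bool:
            return self.goods.get(good, 0) == 0   # demand: what the agent lacks

        def offer(self, good: str, other: "Agent", in_return: str) -> bool:
            """Send a message proposing an exchange; act only if reciprocal."""
            reciprocal = (
                self.goods.get(good, 0) > 0 and other.wants(good)
                and other.goods.get(in_return, 0) > 0 and self.wants(in_return)
            )
            if not reciprocal:
                return False                      # no reciprocity, no transaction
            self.goods[good] -= 1
            other.goods[good] = other.goods.get(good, 0) + 1
            other.goods[in_return] -= 1
            self.goods[in_return] = self.goods.get(in_return, 0) + 1
            return True                           # demonstrated interests satisfied

    a = Agent("a", {"grain": 2})
    b = Agent("b", {"tools": 1})
    print(a.offer("grain", b, in_return="tools")) # True: reciprocal exchange
    print(a.goods, b.goods)                       # {'grain': 1, 'tools': 1} {'tools': 0, 'grain': 1}
    ```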


    Source date (UTC): 2025-05-10 23:19:40 UTC

    Original post: https://x.com/i/articles/1921344417291804991